
20 December 2023

Ulrike Uhlig: How volunteer work in F/LOSS exacerbates pre-existing lines of oppression, and what that has to do with low diversity

This is a post I wrote in June 2022, but did not publish back then. After first publishing it in December 2023, a perfectionist, insecure part of me unpublished it again. After receiving positive feedback, I slightly amended it and am republishing it now. In this post, I talk about unpaid work in F/LOSS, taking the example of hackathons, and why, in my opinion, the expectation of volunteer work is hurting diversity. Disclaimer: I don't have all the answers, only some ideas and questions.

Previous findings In 2006, the Flosspols survey sought to explain the role of gender in free/libre/open source software (F/LOSS) communities because an earlier study had revealed a significant discrepancy in the proportion of men to women. It showed that just about 1.5% of F/LOSS community members were female at that time, compared with 28% in proprietary software (which is also a low number). Their key findings were, to name just a few:
  • that F/LOSS rewards producing code rather than producing software. It thereby puts most emphasis on a particular skill set. Other activities, such as interface design or documentation, are understood as less technical and therefore less prestigious.
  • The reliance on long hours of intensive computing in writing successful code means that men, who in general assume that time outside of waged labour is "theirs", are freer to participate than women, who normally still assume a disproportionate amount of domestic responsibilities. Female F/LOSS participants, however, seem to be able to allocate a disproportionately larger share of their leisure time to their F/LOSS activities. This indicates that women who are not able to spend as much time on voluntary activities have difficulties integrating into the community.
We also know from the 2016 Debian survey, published in 2021, that a majority of Debian contributors are employed, rather than being contractors or students. Also, 95.5% of respondents to that study were men between the ages of 30 and 49, highly educated, with the largest groups coming from Germany, France, the USA, and the UK. The study found that only 20% of the respondents were being paid to work on Debian. Half of these 20% estimate that the amount of Debian work they are being paid for corresponds to less than 20% of the work they do there. On the other side, 14% of those who are being paid for Debian work declared that 80-100% of the work they do in Debian is remunerated.

So, if a majority of people are not paid, why do they work on F/LOSS? Or: what are the incentives of free software? In 2021, Louis-Philippe Véronneau, aka Pollo, who is not only a Debian Developer but also an economist, published his thesis What are the incentive structures of free software (the actual thesis was written in French). One very interesting finding Pollo pointed out is this one:
Indeed, while we have proven that there is a strong and significative correlation between the income and the participation in a free/libre software project, it is not possible for us to pronounce ourselves about the causality of this link.
In the French original text:
En effet, si nous avons prouvé qu'il existe une corrélation forte et significative entre le salaire et la participation à un projet libre, il ne nous est pas possible de nous prononcer sur la causalité de ce lien.
Said differently, it is certain that there is a relationship between income and F/LOSS contribution, but it's unclear whether working on free/libre software ultimately helps in finding a well-paid job, or if having a well-paid job is what enables work on free/libre software. I would like to scratch at this question a bit further, mostly relying on my own observations, experiences, and discussions with F/LOSS contributors.

Volunteer work is unpaid work We often hear of hackathons, hack weeks, or hackfests. I've been at some such events myself, Tails organized one, the IETF regularly organizes hackathons, and last week (June 2022!) I saw an invitation for a hack week with the Tor Project. This type of event generally lasts several days. While the people who organize these events are being paid by the organizations they work for, participants, on the other hand, generally join on a volunteer basis. Who can we expect to show up as participants at this type of event under these circumstances? To answer this question, I collected some ideas:
  • people who have an employer sponsoring their work
  • people who have a funder/grant sponsoring their work
  • people who have a high income and can take time off easily (in that regard, remember the gender pay gap: women often earn less than men for the same work)
  • people who rely on family wealth (living off an inheritance, living on rights payments from a famous grandparent - I'm not making these situations up, there are actual people in such financially favorable situations)
  • people who don't need much money because they don't have to pay rent or pay low rent (besides home owners, that category includes people who live in squats or have social welfare paying their rent, and people who live with parents or caretakers)
  • people who don't need to do care work (for children, elderly family members, pets. Remember that most care work is still done by women.)
  • students who have financial support or are in a situation in which they do not yet need to generate a lot of income
  • people who otherwise have free time at their disposal
So, who, in your opinion, fits these unwritten requirements? Looking at this list, it's pretty clear to me why we'd mostly find white men from the Global North, generally with higher education, at hackathons and in F/LOSS development. ("Great, they're a culture fit!") Yes, there will also always be some people from marginalized groups who will attend such events because they expect to network, to find an internship, to find a better job in the future, or to add their participation to their curriculum. To me, this rings a bunch of alarm bells.

Low diversity in F/LOSS projects - a mirror of the distribution of wealth I believe that the lack of diversity in F/LOSS is first of all a mirror of the distribution of wealth at a larger level. And by wealth I'm referring to financial wealth as much as to social wealth in the sense of Bourdieu: families of highly educated parents socially reproducing privilege by allowing their kids to attend better schools, supporting and guiding them in their choices of study and work, providing them with connections to internships that act as springboards into well-paid jobs, and so on. That said, we should ask ourselves as well:

Do F/LOSS projects exacerbate existing lines of oppression by relying on unpaid work? Let's look again at the causality question from Pollo's research (in my words):
It is unclear whether working on free/libre software ultimately helps in finding a well-paid job, or if having a well-paid job is what enables work on free/libre software.
Maybe we need to imagine this cause-effect relationship over time: as a student, without children and with lots of free time, hopefully with some money from the state or the family, people can spend time on F/LOSS, collect experience, and earn recognition - and later find a well-paid job, turning unpaid F/LOSS contributions into a hobby, cementing their status in the community, while at the same time generating a sense of well-being from working on the common good. This is a quite common scenario. As the Flosspols study revealed, however, boys often get their own computer at the age of 14, while girls get one only at the age of 20. (These numbers might be slightly different now, and possibly many people don't own an actual laptop or desktop computer anymore; instead they own mobile devices, which do not exactly invite them to look behind the surface, take things apart, learn, and appropriate technology.) In any case, the above scenario does not allow people who join F/LOSS later in life, e.g. when changing careers, to find their place. I believe that F/LOSS projects cannot expect to have more women, people of color, people from working-class backgrounds, or people from outside of Germany, France, the USA, the UK, Australia, and Canada on board as long as volunteer work is the status quo and waged labour an earned privilege.

Wait, are you criticizing all these wonderful people who sacrifice their free time to work towards the common good? No, that's definitely not my intention; I'm glad that F/LOSS exists, and the F/LOSS ecosystem has always represented a small utopia to me that is worth cherishing and nurturing. However, I think we still need to talk more about the lack of diversity, and investigate it further.

Some types of work are never paid Besides free work at hacking events, let me also underline that a lot of work in F/LOSS is not considered payable work (yes, that's an oxymoron!). Which F/LOSS project, for example, has ever paid translators a decent fee? Which project has ever considered that doing the social glue work, often done by women in the projects, is work that should be paid for? Which F/LOSS projects pay the people who do their Debian packaging, rather than relying on yet another already well-paid white man who can afford to do this work for free, all the while holding up how great the F/LOSS ecosystem is? And how many people on opensourcedesign jobs are looking to get their logo or website done for free? (Isn't that heart icon appealing to your altruistic empathy?) In my experience, even F/LOSS projects which try to do the right thing by paying everyone the same amount of money per hour run into issues when it turns out that not all hours are equal, that some types of work do not qualify for remuneration at all, or that the rules for clocking work are not applied in the same way by everyone.

Not every interaction should have a monetary value, but... Some of you want to keep working without being paid, because that feels a bit like communism within capitalism; it makes you feel good to contribute to the greater good while not having the system determine your value through money. I hear you. I've been there (and sometimes still am). But as long as we live in this system, even though we didn't choose to and maybe even despise it - communism is not about working for free, it's about getting paid equally and adequately. We may not think about it while under the age of 40 or 45, but working without adequate financial compensation, even half of the time, will ultimately result in not being able to care for oneself when sick or old. And while this may not be an issue for people who inherit wealth, or have an otherwise safe economic background, e.g. an academic salary, it is a huge problem and barrier for many people coming out of the working or service classes. (Oh, and please don't repeat the neoliberal lie that everyone can achieve whatever they aim for if they just try hard enough. French research shows that (in France) one has only a 30% chance of becoming a "class defector" and changing social class upwards. "But I managed to get out and move up, so everyone can!" - well, if you believe that, I'm afraid you might be experiencing survivor bias.)

Not all bodies are equally able We should also be aware that not all of us can work with the same amount of energy. There is yet another category of people who are excluded by the expectation of volunteer work, either because the waged labour they do already eats up all of their energy, or because their bodies are not disposed to do that much work, for example because of mental health issues - such as depression - or because of physical disabilities.

When organizing events relying on volunteer work, please think about these things. Yes, you can tell people that they should ask their employer to pay them for attending a hackathon - but, as I've hopefully shown, that would not work for many people, especially newcomers. Instead, you could propose a fund to make it possible for people who would not normally attend to do so. DebConf is a good example, having done this for many years.

To conclude, I would like to urge free software projects that have a budget and directly pay some people from it to map out where they rely on volunteer work and how this hurts diversity in their project. How do you or your project exacerbate pre-existing lines of oppression by granting or not granting monetary value to certain types of work? What is it that you take for granted? As always, I'm curious about your feedback!

Worth a read These ideas are far from new. Ashe Dryden's well-researched post The ethics of unpaid labor and the OSS community dates back to 2013 and is as important as it was ten years ago.

14 December 2023

Arturo Borrero González: OpenTofu: handcrafted include-file mechanism with YAML

I recently started playing with Terraform/OpenTofu almost on a daily basis. The other day I was working with Amazon Managed Prometheus (or AMP) and wanted to define Prometheus alert rules in YAML files. I decided that I needed a way to put the alerts in a bunch of files, and then have the declarative code load them into the correct AMP workspace. I came up with this code pattern, which I'm sharing here for my future reference and in case it is interesting to someone else. The YAML file where I specify the AMP workspace, and where the alert rule files live:
---
alert_files:
  my_alerts_production:
    amp:
      workspace: "production"
    files: "alert_rules/production/*.yaml"
  my_alerts_staging:
    amp:
      workspace: "staging"
    files: "alert_rules/staging/*.yaml"
Note the files entry contains a file pattern. I will later expand the pattern using the fileset() function. Each rule file would be something like this:
---
name: "my_rule_namespace"
rule_data:  
  # this is prometheus-specific config
  groups:
    - name: "example_alert_group"
      rules:
      - alert: Example_Alert_Cpu
        # just arbitrary values, to produce an example alert
        expr: avg(rate(ecs_cpu_seconds_total{container=~"something"}[2m])) > 1
        for: 10s
        annotations:
          summary: "CPU usage is too high"
          description: "The container average CPU usage is too high."
I'm interested in the data structure mutating into something similar to this:
---
alert_files:
  my_alerts_production:
    amp:
      workspace: "production"
    alerts_data:
      - name: rule_namespace_1
        rule_data:  
          # actual alert definition here
          [..]
      - name: rule_namespace_2
        rule_data:  
          # actual alert definition here
          [..]
  my_alerts_staging:
    amp:
      workspace: "staging"
    alerts_data:
      - name: rule_namespace_1
        rule_data:  
          # actual alert definition here
          [..]
      - name: rule_namespace_2
        rule_data:  
          # actual alert definition here
          [..]
This is the algorithm that does the trick:
locals {
  alerts_config = {
    for x, y in {
      for k, v in local.config.alert_files :
      k => {
        amp : (v.amp),
        files : fileset("", v.files)
      }
    } : x => {
      amp : (y.amp),
      alertmanager_data : [
        for z in (y.files) :
        yamldecode(file(z))
      ]
    }
  }
}
Because of the declarative nature of the Terraform/OpenTofu language, I needed to implement 3 different for loops. Each loop reads the map and transforms it in some way, passing the result into the next loop. A bit convoluted, if you ask me. To explain the logic, I think it makes more sense to read it from the inside out. First loop:
    for k, v in local.config.alert_files :
    k => {
      amp : (v.amp),
      files : fileset("", v.files)
    }
This loop iterates over the input YAML map in key-value pairs, remapping each amp entry and expanding the file globs with fileset() into a temporary files entry. Second loop:
    for x, y in {
        # previous fileset() loop
    } : x => {
      amp : (y.amp),
      alertmanager_data : [
        # yamldecode() loop
      ]
    }
This intermediate loop is responsible for building the final data structure. It iterates over the result of the previous fileset() loop and remaps it, calling the next loop, the yamldecode() one. Note how the amp entry is rebuilt in each remap (in the first loop and in this one), otherwise we would lose it! Third loop:
    alertmanager_data : [
        for z in (y.files) :
        yamldecode(file(z))
    ]
And finally, this is maybe the easiest loop of the three: we iterate over the temporary files entry that was created in the first loop, calling yamldecode() for each of the file names generated by fileset(). The resulting data structure should allow you to easily create resources later in a for_each loop.
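Not part of the original post, but here is a minimal sketch of that final for_each step. The resource type aws_prometheus_rule_group_namespace is the AWS provider resource for AMP rule group namespaces; the helper local alert_namespaces and the direct use of the workspace name as workspace_id are my assumptions and would need adapting to the actual provider setup:
# hypothetical consumption sketch: flatten the per-workspace alert lists into
# one map keyed by "<workspace key>/<rule namespace>", one entry per resource
locals {
  alert_namespaces = merge([
    for ws_key, ws in local.alerts_config : {
      for alert in ws.alertmanager_data :
      "${ws_key}/${alert.name}" => {
        workspace = ws.amp.workspace
        name      = alert.name
        data      = yamlencode(alert.rule_data)
      }
    }
  ]...)
}

resource "aws_prometheus_rule_group_namespace" "this" {
  for_each     = local.alert_namespaces
  name         = each.value.name
  # assumption: the workspace name is resolved to a workspace ID elsewhere
  workspace_id = each.value.workspace
  data         = each.value.data
}
Each alert rule file would then become one rule group namespace in the corresponding AMP workspace.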

9 December 2023

Simon Josefsson: Classic McEliece goes to IETF and OpenSSH

My earlier work on Streamlined NTRU Prime has been progressing along. The IETF document on sntrup761 in SSH has passed several process points. GnuPG's libgcrypt has added support for sntrup761. The libssh support for sntrup761 is working, but the merge request is stuck, mostly due to lack of time to debug why the regression test suite sporadically errors out in non-sntrup761-related parts with the patch. The foundation for lattice-based post-quantum algorithms has some uncertainty around it, and I have felt that there is more to the post-quantum story than adding sntrup761 to implementations. Classic McEliece has been mentioned to me a couple of times, and I took some time to learn it, did a cut'n'paste job of the proposed ISO standard, and published draft-josefsson-mceliece in the IETF to make the algorithm easily available to the IETF community. A high-quality implementation of Classic McEliece has been published as libmceliece, and I've been supporting the work of Jan Mojžíš to package libmceliece for Debian; alas, it has been stuck in the ftp-master NEW queue for manual review for over two months. The pre-dependencies librandombytes and libcpucycles are available in Debian already. All that text writing and packaging work set the scene to write some code. When I added support for sntrup761 in libssh, I became familiar with the OpenSSH code base, so it was natural to return to OpenSSH to experiment with a new SSH KEX for Classic McEliece. DJB suggested picking mceliece6688128 and combining it with the existing X25519+sntrup761 or with plain X25519. While a three-algorithm hybrid between X25519, sntrup761 and mceliece6688128 would be a simple drop-in for those who don't want to lose the benefits offered by sntrup761, I decided to start the journey with a pure combination of X25519 and mceliece6688128. The key combiner in sntrup761x25519 is a simple SHA512 call, and the only good thing I can say about that is that it is simple to describe and implement, and doesn't raise too many questions since it is already deployed. After procrastinating on coding for months, once I sat down to work it only took a couple of hours until I had a successful Classic McEliece SSH connection. I suppose my brain had sorted everything out in the background before I started. To reproduce it, please try the following in a Debian testing environment (I use podman to get a clean environment).
# podman run -it --rm debian:testing-slim
apt update
apt dist-upgrade -y
apt install -y wget python3 librandombytes-dev libcpucycles-dev gcc make git autoconf libz-dev libssl-dev
cd ~
wget -q -O- https://lib.mceliece.org/libmceliece-20230612.tar.gz | tar xfz -
cd libmceliece-20230612/
./configure
make install
ldconfig
cd ..
git clone https://gitlab.com/jas/openssh-portable
cd openssh-portable
git checkout jas/mceliece
autoreconf
./configure # verify 'libmceliece support: yes'
make # CC="cc -DDEBUG_KEX=1 -DDEBUG_KEXDH=1 -DDEBUG_KEXECDH=1"
You should now have a working SSH client and server that supports Classic McEliece! Verify support by running ./ssh -Q kex and it should mention mceliece6688128x25519-sha512@openssh.com. To have it print plenty of debug output, you may remove the # character on the final line, but don't use such a build in production. You can test it as follows:
./ssh-keygen -A # writes to /usr/local/etc/ssh_host_...
# setup public-key based login by running the following:
./ssh-keygen -t rsa -f ~/.ssh/id_rsa -P ""
cat ~/.ssh/id_rsa.pub > ~/.ssh/authorized_keys
adduser --system sshd
mkdir /var/empty
while true; do $PWD/sshd -p 2222 -f /dev/null; done &
./ssh -v -p 2222 localhost -oKexAlgorithms=mceliece6688128x25519-sha512@openssh.com date
On the client you should see output like this:
OpenSSH_9.5p1, OpenSSL 3.0.11 19 Sep 2023
...
debug1: SSH2_MSG_KEXINIT sent
debug1: SSH2_MSG_KEXINIT received
debug1: kex: algorithm: mceliece6688128x25519-sha512@openssh.com
debug1: kex: host key algorithm: ssh-ed25519
debug1: kex: server->client cipher: chacha20-poly1305@openssh.com MAC: <implicit> compression: none
debug1: kex: client->server cipher: chacha20-poly1305@openssh.com MAC: <implicit> compression: none
debug1: expecting SSH2_MSG_KEX_ECDH_REPLY
debug1: SSH2_MSG_KEX_ECDH_REPLY received
debug1: Server host key: ssh-ed25519 SHA256:YognhWY7+399J+/V8eAQWmM3UFDLT0dkmoj3pIJ0zXs
...
debug1: Host '[localhost]:2222' is known and matches the ED25519 host key.
debug1: Found key in /root/.ssh/known_hosts:1
debug1: rekey out after 134217728 blocks
debug1: SSH2_MSG_NEWKEYS sent
debug1: expecting SSH2_MSG_NEWKEYS
debug1: SSH2_MSG_NEWKEYS received
debug1: rekey in after 134217728 blocks
...
debug1: Sending command: date
debug1: pledge: fork
debug1: permanently_set_uid: 0/0
Environment:
  USER=root
  LOGNAME=root
  HOME=/root
  PATH=/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin
  MAIL=/var/mail/root
  SHELL=/bin/bash
  SSH_CLIENT=::1 46894 2222
  SSH_CONNECTION=::1 46894 ::1 2222
debug1: client_input_channel_req: channel 0 rtype exit-status reply 0
debug1: client_input_channel_req: channel 0 rtype eow@openssh.com reply 0
Sat Dec  9 22:22:40 UTC 2023
debug1: channel 0: free: client-session, nchannels 1
Transferred: sent 1048044, received 3500 bytes, in 0.0 seconds
Bytes per second: sent 23388935.4, received 78108.6
debug1: Exit status 0
Notice the kex: algorithm: mceliece6688128x25519-sha512@openssh.com output. How about network bandwidth usage? Below is a comparison of a complete SSH client connection, such as the one above, that logs in, prints the date, and logs out. Plain X25519 is around 7kB, X25519 with sntrup761 is around 9kB, and mceliece6688128 with X25519 is around 1MB. Yes, Classic McEliece has large keys, but for many environments, 1MB of data for session establishment will barely be noticeable.
./ssh -v -p 2222 localhost -oKexAlgorithms=curve25519-sha256 date 2>&1 | grep ^Transferred
Transferred: sent 3028, received 3612 bytes, in 0.0 seconds
./ssh -v -p 2222 localhost -oKexAlgorithms=sntrup761x25519-sha512@openssh.com date 2>&1 | grep ^Transferred
Transferred: sent 4212, received 4596 bytes, in 0.0 seconds
./ssh -v -p 2222 localhost -oKexAlgorithms=mceliece6688128x25519-sha512@openssh.com date 2>&1 | grep ^Transferred
Transferred: sent 1048044, received 3764 bytes, in 0.0 seconds
So how about session establishment time?
date; i=0; while test $i -le 100; do ./ssh -v -p 2222 localhost -oKexAlgorithms=curve25519-sha256 date > /dev/null 2>&1; i=`expr $i + 1`; done; date
Sat Dec  9 22:39:19 UTC 2023
Sat Dec  9 22:39:25 UTC 2023
# 6 seconds
date; i=0; while test $i -le 100; do ./ssh -v -p 2222 localhost -oKexAlgorithms=sntrup761x25519-sha512@openssh.com date > /dev/null 2>&1; i=`expr $i + 1`; done; date
Sat Dec  9 22:39:29 UTC 2023
Sat Dec  9 22:39:38 UTC 2023
# 9 seconds
date; i=0; while test $i -le 100; do ./ssh -v -p 2222 localhost -oKexAlgorithms=mceliece6688128x25519-sha512@openssh.com date > /dev/null 2>&1; i=`expr $i + 1`; done; date
Sat Dec  9 22:39:55 UTC 2023
Sat Dec  9 22:40:07 UTC 2023
# 12 seconds
I never noticed adding sntrup761, so I'm pretty sure I wouldn't notice this increase either. This is all running on my laptop that runs Trisquel, so take it with a grain of salt, but at least the magnitude is clear. Future work items include: Happy post-quantum SSH'ing! Update: Changing the mceliece6688128_keypair call to mceliece6688128f_keypair (i.e., using the fully compatible f-variant) results in McEliece being just as fast as sntrup761 on my machine. Update 2023-12-26: An initial IETF document draft-josefsson-ssh-mceliece-00 was published.

7 December 2023

Daniel Kahn Gillmor: New OpenPGP certificate for dkg, December 2023

dkg's New OpenPGP certificate in December 2023 In December of 2023, I'm moving to a new OpenPGP certificate. You might know my old OpenPGP certificate, which had a fingerprint of C29F8A0C01F35E34D816AA5CE092EB3A5CA10DBA. My new OpenPGP certificate has a fingerprint of D477040C70C2156A5C298549BB7E9101495E6BF7. Both certificates have the same set of User IDs:
  • Daniel Kahn Gillmor
  • <dkg@debian.org>
  • <dkg@fifthhorseman.net>
You can find a version of this transition statement signed by both the old and new certificates at: https://dkg.fifthhorseman.net/2023-dkg-openpgp-transition.txt The new OpenPGP certificate is:
-----BEGIN PGP PUBLIC KEY BLOCK-----
xjMEZXEJyxYJKwYBBAHaRw8BAQdA5BpbW0bpl5qCng/RiqwhQINrplDMSS5JsO/Y
O+5Zi7HCwAsEHxYKAH0FgmVxCcsDCwkHCRC7fpEBSV5r90cUAAAAAAAeACBzYWx0
QG5vdGF0aW9ucy5zZXF1b2lhLXBncC5vcmfUAgfN9tyTSxpxhmHA1r63GiI4v6NQ
mrrWVLOBRJYuhQMVCggCmwECHgEWIQTUdwQMcMIValwphUm7fpEBSV5r9wAAmaEA
/3MvYJMxQdLhIG4UDNMVd2bsovwdcTrReJhLYyFulBrwAQD/j/RS+AXQIVtkcO9b
l6zZTAO9x6yfkOZbv0g3eNyrAs0QPGRrZ0BkZWJpYW4ub3JnPsLACwQTFgoAfQWC
ZXEJywMLCQcJELt+kQFJXmv3RxQAAAAAAB4AIHNhbHRAbm90YXRpb25zLnNlcXVv
aWEtcGdwLm9yZ4l+Z3i19Uwjw3CfTNFCDjRsoufMoPOM7vM8HoOEdn/vAxUKCAKb
AQIeARYhBNR3BAxwwhVqXCmFSbt+kQFJXmv3AAALZQEAhJsgouepQVV98BHUH6Sv
WvcKrb8dQEZOvHFbZQQPNWgA/A/DHkjYKnUkCg8Zc+FonqOS/35sHhNA8CwqSQFr
tN4KzRc8ZGtnQGZpZnRoaG9yc2VtYW4ubmV0PsLACgQTFgoAfQWCZXEJywMLCQcJ
ELt+kQFJXmv3RxQAAAAAAB4AIHNhbHRAbm90YXRpb25zLnNlcXVvaWEtcGdwLm9y
ZxLvwkgnslsAuo+IoSa9rv8+nXpbBdab2Ft7n4H9S+d/AxUKCAKbAQIeARYhBNR3
BAxwwhVqXCmFSbt+kQFJXmv3AAAtFgD4wqcUfQl7nGLQOcAEHhx8V0Bg8v9ov8Gs
Y1ei1BEFwAD/cxmxmDSO0/tA+x4pd5yIvzgfGYHSTxKS0Ww3hzjuZA7NE0Rhbmll
bCBLYWhuIEdpbGxtb3LCwA4EExYKAIAFgmVxCcsDCwkHCRC7fpEBSV5r90cUAAAA
AAAeACBzYWx0QG5vdGF0aW9ucy5zZXF1b2lhLXBncC5vcmd7X4TgiINwnzh4jar0
Pf/b5hgxFPngCFxJSmtr/f0YiQMVCggCmQECmwECHgEWIQTUdwQMcMIValwphUm7
fpEBSV5r9wAAMuwBAPtMonKbhGOhOy+8miAb/knJ1cIPBjLupJbjM+NUE1WyAQD1
nyGW+XwwMrprMwc320mdJH9B0jdokJZBiN7++0NoBM4zBGVxCcsWCSsGAQQB2kcP
AQEHQI19uRatkPSFBXh8usgciEDwZxTnnRZYrhIgiFMybBDQwsC/BBgWCgExBYJl
cQnLCRC7fpEBSV5r90cUAAAAAAAeACBzYWx0QG5vdGF0aW9ucy5zZXF1b2lhLXBn
cC5vcmfCopazDnq6hZUsgVyztl5wmDCmxI169YLNu+IpDzJEtQKbAr6gBBkWCgBv
BYJlcQnLCRB3LRYeNc1LgUcUAAAAAAAeACBzYWx0QG5vdGF0aW9ucy5zZXF1b2lh
LXBncC5vcmcQglI7G7DbL9QmaDkzcEuk3QliM4NmleIRUW7VvIBHMxYhBHS8BMQ9
hghL6GcsBnctFh41zUuBAACwfwEAqDULksr8PulKRcIP6N9NI/4KoznyIcuOHi8q
Gk4qxMkBAIeV20SPEnWSw9MWAb0eKEcfupzr/C+8vDvsRMynCWsDFiEE1HcEDHDC
FWpcKYVJu36RAUlea/cAAFD1AP0YsE3Eeig1tkWaeyrvvMf5Kl1tt2LekTNWDnB+
FUG9SgD+Ka8vfPR8wuV8D3y5Y9Qq9xGO+QkEBCW0U1qNypg65QHOOARlcQnLEgor
BgEEAZdVAQUBAQdAWTLEa0WmnhUmDBdWXX0ZlYAa4g1CK/fXg0NPOQSteA4DAQgH
wsAABBgWCgByBYJlcQnLCRC7fpEBSV5r90cUAAAAAAAeACBzYWx0QG5vdGF0aW9u
cy5zZXF1b2lhLXBncC5vcmexrMBZe0QdQ+ZJOZxFkAiwCw2I7yTSF2Ox9GVFWKmA
mAKbDBYhBNR3BAxwwhVqXCmFSbt+kQFJXmv3AABcJQD/f4ltpSvLBOBEh/C2dIYa
dgSuqkCqq0B4WOhFRkWJZlcA/AxqLWG4o8UrrmwrmM42FhgxKtEXwCSHE00u8wR4
Up8G
=9Yc8
-----END PGP PUBLIC KEY BLOCK-----
When I have some reasonable number of certifications, I'll update the certificate associated with my e-mail addresses on https://keys.openpgp.org, in DANE, and in WKD. Until then, those lookups should continue to provide the old certificate.
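Not from the original post, but a minimal sketch of how one might import and double-check the new certificate with GnuPG (the file name dkg-new.asc is hypothetical; save the key block above into it first):
# import the new certificate and compare its fingerprint with the one stated above
gpg --import dkg-new.asc
gpg --fingerprint D477040C70C2156A5C298549BB7E9101495E6BF7
# with both the old and the new certificate in the local keyring, the signed
# transition statement can be verified as well
wget -q -O- https://dkg.fifthhorseman.net/2023-dkg-openpgp-transition.txt | gpg --verify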

5 December 2023

Dirk Eddelbuettel: RcppArmadillo 0.12.6.6.1 on CRAN: No More Deprecation

Armadillo is a powerful and expressive C++ template library for linear algebra and scientific computing. It aims towards a good balance between speed and ease of use, has a syntax deliberately close to Matlab, and is useful for algorithm development directly in C++, or quick conversion of research code into production environments. RcppArmadillo integrates this library with the R environment and language and is widely used by (currently) 1126 other packages on CRAN, downloaded 31.7 million times (per the partial logs from the cloud mirrors of CRAN), and the CSDA paper (preprint / vignette) by Conrad and myself has been cited 569 times according to Google Scholar. This release ends the practice of asking Armadillo to suppress deprecation warnings. RcppArmadillo, as noted, has a large user base. Conrad sometimes made changes without too much of a heads-up, so at times it was opportune not to bring those warnings to dozens (or maybe hundreds) of packages at CRAN. Yet we need to balance this with the demonstrable need to call out older deprecated code use. So sixteen months ago, with GitHub issue #391, we started to alert the authors of the 30+ affected packages and supplied either pull requests or emailed patches to all. Eleven months ago, GitHub issue #402 was added for a second deprecation. And the time of making the switch has come. Release 0.12.6.6.1 no longer defines ARMA_IGNORE_DEPRECATED_MARKER. So among the over 1100 packages using RcppArmadillo at CRAN, a good dozen or so were flagged in the upload, but CRAN concurred and let the package migrate to CRAN. If you maintain an affected package, consider applying the patch or pull request now. A simple stop-gap measure also exists: adding -DARMA_IGNORE_DEPRECATED_MARKER to src/Makevars as either PKG_CPPFLAGS or PKG_CXXFLAGS reactivates it. But a proper code update, which is generally simple, may be better. If you are unsure, do not hesitate to get in touch. The set of changes since the last CRAN release follows.

Changes in RcppArmadillo version 0.12.6.6.1 (2023-12-03)
  • Following the extended transition in #391 and #402, this release no longer sets ARMA_IGNORE_DEPRECATED_MARKER. Maintainers of affected packages have received pull requests or patches and can set -DARMA_IGNORE_DEPRECATED_MARKER as PKG_CPPFLAGS.
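For maintainers of affected packages, the stop-gap mentioned above amounts to a one-line addition; a minimal sketch, assuming a package with a conventional src/Makevars (and to be removed again once the code is properly updated):
# temporarily restore the old behaviour of suppressing the deprecation markers
echo 'PKG_CPPFLAGS += -DARMA_IGNORE_DEPRECATED_MARKER' >> src/Makevars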

Courtesy of my CRANberries, there is a diffstat report relative to previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the Rcpp R-Forge page. If you like this or other open-source work I do, you can sponsor me at GitHub.


28 November 2023

Dirk Eddelbuettel: RcppSimdJson 0.1.11 on CRAN: Maintenance

A new maintenance release 0.1.11 of the RcppSimdJson package is now on CRAN. RcppSimdJson wraps the fantastic and genuinely impressive simdjson library by Daniel Lemire and collaborators. Via very clever algorithmic engineering to obtain largely branch-free code, coupled with modern C++ and newer compiler instructions, it results in parsing gigabytes of JSON per second, which is quite mind-boggling. The best-case performance is faster than CPU speed as the use of parallel SIMD instructions and careful branch avoidance can lead to less than one CPU cycle per byte parsed; see the video of the talk by Daniel Lemire at QCon. This release responds to a CRAN request to address issues now identified by -Wformat -Wformat-security. These are frequently pretty simple changes, as was the case here: all it took was a call to compileAttributes() from an updated Rcpp version, which now injects "%s" as a format string when calling Rf_error(). The (very short) NEWS entry for this release follows.

Changes in version 0.1.11 (2023-11-28)
  • RcppExports.cpp has been regenerated under an updated Rcpp to address a print format warning (Dirk in #88).
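For maintainers seeing the same -Wformat / -Wformat-security note on their own Rcpp-based packages, the fix described above essentially boils down to regenerating the autogenerated glue code with a current Rcpp; a minimal sketch, run from the package source directory:
# regenerate src/RcppExports.cpp (and R/RcppExports.R) with the installed Rcpp
Rscript -e 'Rcpp::compileAttributes(".")'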

Courtesy of my CRANberries, there is also a diffstat report for this release. For questions, suggestions, or issues please use the issue tracker at the GitHub repo. If you like this or other open-source work I do, you can now sponsor me at GitHub.


21 November 2023

Mike Hommey: How I (kind of) killed Mercurial at Mozilla

Did you hear the news? Firefox development is moving from Mercurial to Git. While the decision is far from being mine, and I was barely involved in the small incremental changes that ultimately led to this decision, I feel I have to take at least some responsibility. And if you are one of those who would rather use Mercurial than Git, you may direct all your ire at me. But let's take a step back and review the past 25 years leading to this decision. You'll forgive me for skipping some details and any possible inaccuracies. This is already a long post, while I could have been more thorough, even I think that would have been too much. This is also not an official Mozilla position, only my personal perception and recollection as someone who was involved at times, but mostly an observer from a distance. From CVS to DVCS From its release in 1998, the Mozilla source code was kept in a CVS repository. If you're too young to know what CVS is, let's just say it's an old school version control system, with its set of problems. Back then, it was mostly ubiquitous in the Open Source world, as far as I remember. In the early 2000s, the Subversion version control system gained some traction, solving some of the problems that came with CVS. Incidentally, Subversion was created by Jim Blandy, who now works at Mozilla on completely unrelated matters. In the same period, the Linux kernel development moved from CVS to Bitkeeper, which was more suitable to the distributed nature of the Linux community. BitKeeper had its own problem, though: it was the opposite of Open Source, but for most pragmatic people, it wasn't a real concern because free access was provided. Until it became a problem: someone at OSDL developed an alternative client to BitKeeper, and licenses of BitKeeper were rescinded for OSDL members, including Linus Torvalds (they were even prohibited from purchasing one). Following this fiasco, in April 2005, two weeks from each other, both Git and Mercurial were born. The former was created by Linus Torvalds himself, while the latter was developed by Olivia Mackall, who was a Linux kernel developer back then. And because they both came out of the same community for the same needs, and the same shared experience with BitKeeper, they both were similar distributed version control systems. Interestingly enough, several other DVCSes existed: In this landscape, the major difference Git was making at the time was that it was blazing fast. Almost incredibly so, at least on Linux systems. That was less true on other platforms (especially Windows). It was a game-changer for handling large codebases in a smooth manner. Anyways, two years later, in 2007, Mozilla decided to move its source code not to Bzr, not to Git, not to Subversion (which, yes, was a contender), but to Mercurial. The decision "process" was laid down in two rather colorful blog posts. My memory is a bit fuzzy, but I don't recall that it was a particularly controversial choice. All of those DVCSes were still young, and there was no definite "winner" yet (GitHub hadn't even been founded). It made the most sense for Mozilla back then, mainly because the Git experience on Windows still wasn't there, and that mattered a lot for Mozilla, with its diverse platform support. As a contributor, I didn't think much of it, although to be fair, at the time, I was mostly consuming the source tarballs. 
Personal preferences Digging through my archives, I've unearthed a forgotten chapter: I did end up setting up both a Mercurial and a Git mirror of the Firefox source repository on alioth.debian.org. Alioth.debian.org was a FusionForge-based collaboration system for Debian developers, similar to SourceForge. It was the ancestor of salsa.debian.org. I used those mirrors for the Debian packaging of Firefox (cough cough Iceweasel). The Git mirror was created with hg-fast-export, and the Mercurial mirror was only a necessary step in the process. By that time, I had converted my Subversion repositories to Git, and switched off SVK. Incidentally, I started contributing to Git around that time as well. I apparently did this not too long after Mozilla switched to Mercurial. As a Linux user, I think I just wanted the speed that Mercurial was not providing. Not that Mercurial was that slow, but the difference between a couple seconds and a couple hundred milliseconds was a significant enough difference in user experience for me to prefer Git (and Firefox was not the only thing I was using version control for) Other people had also similarly created their own mirror, or with other tools. But none of them were "compatible": their commit hashes were different. Hg-git, used by the latter, was putting extra information in commit messages that would make the conversion differ, and hg-fast-export would just not be consistent with itself! My mirror is long gone, and those have not been updated in more than a decade. I did end up using Mercurial, when I got commit access to the Firefox source repository in April 2010. I still kept using Git for my Debian activities, but I now was also using Mercurial to push to the Mozilla servers. I joined Mozilla as a contractor a few months after that, and kept using Mercurial for a while, but as a, by then, long time Git user, it never really clicked for me. It turns out, the sentiment was shared by several at Mozilla. Git incursion In the early 2010s, GitHub was becoming ubiquitous, and the Git mindshare was getting large. Multiple projects at Mozilla were already entirely hosted on GitHub. As for the Firefox source code base, Mozilla back then was kind of a Wild West, and engineers being engineers, multiple people had been using Git, with their own inconvenient workflows involving a local Mercurial clone. The most popular set of scripts was moz-git-tools, to incorporate changes in a local Git repository into the local Mercurial copy, to then send to Mozilla servers. In terms of the number of people doing that, though, I don't think it was a lot of people, probably a few handfuls. On my end, I was still keeping up with Mercurial. I think at that time several engineers had their own unofficial Git mirrors on GitHub, and later on Ehsan Akhgari provided another mirror, with a twist: it also contained the full CVS history, which the canonical Mercurial repository didn't have. This was particularly interesting for engineers who needed to do some code archeology and couldn't get past the 2007 cutoff of the Mercurial repository. I think that mirror ultimately became the official-looking, but really unofficial, mozilla-central repository on GitHub. On a side note, a Mercurial repository containing the CVS history was also later set up, but that didn't lead to something officially supported on the Mercurial side. Some time around 2011~2012, I started to more seriously consider using Git for work myself, but wasn't satisfied with the workflows others had set up for themselves. 
I really didn't like the idea of wasting extra disk space keeping a Mercurial clone around while using a Git mirror. I wrote a Python script that would use Mercurial as a library to access a remote repository and produce a git-fast-import stream. That would allow the creation of a git repository without a local Mercurial clone. It worked quite well, but it was not able to incrementally update. Other, more complete tools existed already, some of which I mentioned above. But as time was passing and the size and depth of the Mercurial repository was growing, these tools were showing their limits and were too slow for my taste, especially for the initial clone. Boot to Git In the same time frame, Mozilla ventured in the Mobile OS sphere with Boot to Gecko, later known as Firefox OS. What does that have to do with version control? The needs of third party collaborators in the mobile space led to the creation of what is now the gecko-dev repository on GitHub. As I remember it, it was challenging to create, but once it was there, Git users could just clone it and have a working, up-to-date local copy of the Firefox source code and its history... which they could already have, but this was the first officially supported way of doing so. Coincidentally, Ehsan's unofficial mirror was having trouble (to the point of GitHub closing the repository) and was ultimately shut down in December 2013. You'll often find comments on the interwebs about how GitHub has become unreliable since the Microsoft acquisition. I can't really comment on that, but if you think GitHub is unreliable now, rest assured that it was worse in its beginning. And its sustainability as a platform also wasn't a given, being a rather new player. So on top of having this official mirror on GitHub, Mozilla also ventured in setting up its own Git server for greater control and reliability. But the canonical repository was still the Mercurial one, and while Git users now had a supported mirror to pull from, they still had to somehow interact with Mercurial repositories, most notably for the Try server. Git slowly creeping in Firefox build tooling Still in the same time frame, tooling around building Firefox was improving drastically. For obvious reasons, when version control integration was needed in the tooling, Mercurial support was always a no-brainer. The first explicit acknowledgement of a Git repository for the Firefox source code, other than the addition of the .gitignore file, was bug 774109. It added a script to install the prerequisites to build Firefox on macOS (still called OSX back then), and that would print a message inviting people to obtain a copy of the source code with either Mercurial or Git. That was a precursor to current bootstrap.py, from September 2012. Following that, as far as I can tell, the first real incursion of Git in the Firefox source tree tooling happened in bug 965120. A few days earlier, bug 952379 had added a mach clang-format command that would apply clang-format-diff to the output from hg diff. Obviously, running hg diff on a Git working tree didn't work, and bug 965120 was filed, and support for Git was added there. That was in January 2014. A year later, when the initial implementation of mach artifact was added (which ultimately led to artifact builds), Git users were an immediate thought. But while they were considered, it was not to support them, but to avoid actively breaking their workflows. Git support for mach artifact was eventually added 14 months later, in March 2016. 
From gecko-dev to git-cinnabar Let's step back a little here, back to the end of 2014. My user experience with Mercurial had reached a level of dissatisfaction that was enough for me to decide to take that script from a couple years prior and make it work for incremental updates. That meant finding a way to store enough information locally to be able to reconstruct whatever the incremental updates would be relying on (guess why other tools hid a local Mercurial clone under hood). I got something working rather quickly, and after talking to a few people about this side project at the Mozilla Portland All Hands and seeing their excitement, I published a git-remote-hg initial prototype on the last day of the All Hands. Within weeks, the prototype gained the ability to directly push to Mercurial repositories, and a couple months later, was renamed to git-cinnabar. At that point, as a Git user, instead of cloning the gecko-dev repository from GitHub and switching to a local Mercurial repository whenever you needed to push to a Mercurial repository (i.e. the aforementioned Try server, or, at the time, for reviews), you could just clone and push directly from/to Mercurial, all within Git. And it was fast too. You could get a full clone of mozilla-central in less than half an hour, when at the time, other similar tools would take more than 10 hours (needless to say, it's even worse now). Another couple months later (we're now at the end of April 2015), git-cinnabar became able to start off a local clone of the gecko-dev repository, rather than clone from scratch, which could be time consuming. But because git-cinnabar and the tool that was updating gecko-dev weren't producing the same commits, this setup was cumbersome and not really recommended. For instance, if you pushed something to mozilla-central with git-cinnabar from a gecko-dev clone, it would come back with a different commit hash in gecko-dev, and you'd have to deal with the divergence. Eventually, in April 2020, the scripts updating gecko-dev were switched to git-cinnabar, making the use of gecko-dev alongside git-cinnabar a more viable option. Ironically(?), the switch occurred to ease collaboration with KaiOS (you know, the mobile OS born from the ashes of Firefox OS). Well, okay, in all honesty, when the need of syncing in both directions between Git and Mercurial (we only had ever synced from Mercurial to Git) came up, I nudged Mozilla in the direction of git-cinnabar, which, in my (biased but still honest) opinion, was the more reliable option for two-way synchronization (we did have regular conversion problems with hg-git, nothing of the sort has happened since the switch). One Firefox repository to rule them all For reasons I don't know, Mozilla decided to use separate Mercurial repositories as "branches". With the switch to the rapid release process in 2011, that meant one repository for nightly (mozilla-central), one for aurora, one for beta, and one for release. And with the addition of Extended Support Releases in 2012, we now add a new ESR repository every year. Boot to Gecko also had its own branches, and so did Fennec (Firefox for Mobile, before Android). There are a lot of them. And then there are also integration branches, where developer's work lands before being merged in mozilla-central (or backed out if it breaks things), always leaving mozilla-central in a (hopefully) good state. Only one of them remains in use today, though. I can only suppose that the way Mercurial branches work was not deemed practical. 
It is worth noting, though, that Mercurial branches are used in some cases, to branch off a dot-release when the next major release process has already started, so it's not a matter of not knowing the feature exists or some such. In 2016, Gregory Szorc set up a new repository that would contain them all (or at least most of them), which eventually became what is now the mozilla-unified repository. This would e.g. simplify switching between branches when necessary. 7 years later, for some reason, the other "branches" still exist, but most developers are expected to be using mozilla-unified. Mozilla's CI also switched to using mozilla-unified as base repository. Honestly, I'm not sure why the separate repositories are still the main entry point for pushes, rather than going directly to mozilla-unified, but it probably comes down to switching being work, and not being a top priority. Also, it probably doesn't help that working with multiple heads in Mercurial, even (especially?) with bookmarks, can be a source of confusion. To give an example, if you aren't careful, and do a plain clone of the mozilla-unified repository, you may not end up on the latest mozilla-central changeset, but rather, e.g. one from beta, or some other branch, depending which one was last updated. Hosting is simple, right? Put your repository on a server, install hgweb or gitweb, and that's it? Maybe that works for... Mercurial itself, but that repository "only" has slightly over 50k changesets and less than 4k files. Mozilla-central has more than an order of magnitude more changesets (close to 700k) and two orders of magnitude more files (more than 700k if you count the deleted or moved files, 350k if you count the currently existing ones). And remember, there are a lot of "duplicates" of this repository. And I didn't even mention user repositories and project branches. Sure, it's a self-inflicted pain, and you'd think it could probably(?) be mitigated with shared repositories. But consider the simple case of two repositories: mozilla-central and autoland. You make autoland use mozilla-central as a shared repository. Now, you push something new to autoland, it's stored in the autoland datastore. Eventually, you merge to mozilla-central. Congratulations, it's now in both datastores, and you'd need to clean-up autoland if you wanted to avoid the duplication. Now, you'd think mozilla-unified would solve these issues, and it would... to some extent. Because that wouldn't cover user repositories and project branches briefly mentioned above, which in GitHub parlance would be considered as Forks. So you'd want a mega global datastore shared by all repositories, and repositories would need to only expose what they really contain. Does Mercurial support that? I don't think so (okay, I'll give you that: even if it doesn't, it could, but that's extra work). And since we're talking about a transition to Git, does Git support that? You may have read about how you can link to a commit from a fork and make-pretend that it comes from the main repository on GitHub? At least, it shows a warning, now. That's essentially the architectural reason why. So the actual answer is that Git doesn't support it out of the box, but GitHub has some backend magic to handle it somehow (and hopefully, other things like Gitea, Girocco, Gitlab, etc. have something similar). Now, to come back to the size of the repository. A repository is not a static file. It's a server with which you negotiate what you have against what it has that you want. 
Then the server bundles what you asked for based on what you said you have. Or in the opposite direction, you negotiate what you have that it doesn't, you send it, and the server incorporates what you sent it. Fortunately the latter is less frequent and requires authentication. But the former is more frequent and CPU intensive. Especially when pulling a large number of changesets, which, incidentally, cloning is. "But there is a solution for clones" you might say, which is true. That's clonebundles, which offload the CPU intensive part of cloning to a single job scheduled regularly. Guess who implemented it? Mozilla. But that only covers the cloning part. We actually had laid the ground to support offloading large incremental updates and split clones, but that never materialized. Even with all that, that still leaves you with a server that can display file contents, diffs, blames, provide zip archives of a revision, and more, all of which are CPU intensive in their own way. And these endpoints are regularly abused, and cause extra load to your servers, yes plural, because of course a single server won't handle the load for the number of users of your big repositories. And because your endpoints are abused, you have to close some of them. And I'm not mentioning the Try repository with its tens of thousands of heads, which brings its own sets of problems (and it would have even more heads if we didn't fake-merge them once in a while). Of course, all the above applies to Git (and it only gained support for something akin to clonebundles last year). So, when the Firefox OS project was stopped, there wasn't much motivation to continue supporting our own Git server, Mercurial still being the official point of entry, and git.mozilla.org was shut down in 2016. The growing difficulty of maintaining the status quo Slowly, but steadily in more recent years, as new tooling was added that needed some input from the source code manager, support for Git was more and more consistently added. But at the same time, as people left for other endeavors and weren't necessarily replaced, or more recently with layoffs, resources allocated to such tooling have been spread thin. Meanwhile, the repository growth didn't take a break, and the Try repository was becoming an increasing pain, with push times quite often exceeding 10 minutes. The ongoing work to move Try pushes to Lando will hide the problem under the rug, but the underlying problem will still exist (although the last version of Mercurial seems to have improved things). On the flip side, more and more people have been relying on Git for Firefox development, to my own surprise, as I didn't really push for that to happen. It just happened organically, by ways of git-cinnabar existing, providing a compelling experience to those who prefer Git, and, I guess, word of mouth. I was genuinely surprised when I recently heard the use of Git among moz-phab users had surpassed a third. I did, however, occasionally orient people who struggled with Mercurial and said they were more familiar with Git, towards git-cinnabar. I suspect there's a somewhat large number of people who never realized Git was a viable option. But that, on its own, can come with its own challenges: if you use git-cinnabar without being backed by gecko-dev, you'll have a hard time sharing your branches on GitHub, because you can't push to a fork of gecko-dev without pushing your entire local repository, as they have different commit histories. 
And switching to gecko-dev when you weren't already using it requires some extra work to rebase all your local branches from the old commit history to the new one. Clone times with git-cinnabar have also started to go a little out of hand in the past few years, but this was mitigated in a similar manner as with the Mercurial cloning problem: with static files that are refreshed regularly. Ironically, that made cloning with git-cinnabar faster than cloning with Mercurial. But generating those static files is increasingly time-consuming. As of writing, generating those for mozilla-unified takes close to 7 hours. I was predicting clone times over 10 hours "in 5 years" in a post from 4 years ago, I wasn't too far off. With exponential growth, it could still happen, although to be fair, CPUs have improved since. I will explore the performance aspect in a subsequent blog post, alongside the upcoming release of git-cinnabar 0.7.0-b1. I don't even want to check how long it now takes with hg-git or git-remote-hg (they were already taking more than a day when git-cinnabar was taking a couple hours). I suppose it's about time that I clarify that git-cinnabar has always been a side-project. It hasn't been part of my duties at Mozilla, and the extent to which Mozilla supports git-cinnabar is in the form of taskcluster workers on the community instance for both git-cinnabar CI and generating those clone bundles. Consequently, that makes the above git-cinnabar specific issues a Me problem, rather than a Mozilla problem. Taking the leap I can't talk for the people who made the proposal to move to Git, nor for the people who put a green light on it. But I can at least give my perspective. Developers have regularly asked why Mozilla was still using Mercurial, but I think it was the first time that a formal proposal was laid out. And it came from the Engineering Workflow team, responsible for issue tracking, code reviews, source control, build and more. It's easy to say "Mozilla should have chosen Git in the first place", but back in 2007, GitHub wasn't there, Bitbucket wasn't there, and all the available options were rather new (especially compared to the then 21 years-old CVS). I think Mozilla made the right choice, all things considered. Had they waited a couple years, the story might have been different. You might say that Mozilla stayed with Mercurial for so long because of the sunk cost fallacy. I don't think that's true either. But after the biggest Mercurial repository hosting service turned off Mercurial support, and the main contributor to Mercurial going their own way, it's hard to ignore that the landscape has evolved. And the problems that we regularly encounter with the Mercurial servers are not going to get any better as the repository continues to grow. As far as I know, all the Mercurial repositories bigger than Mozilla's are... not using Mercurial. Google has its own closed-source server, and Facebook has another of its own, and it's not really public either. With resources spread thin, I don't expect Mozilla to be able to continue supporting a Mercurial server indefinitely (although I guess Octobus could be contracted to give a hand, but is that sustainable?). Mozilla, being a champion of Open Source, also doesn't live in a silo. At some point, you have to meet your contributors where they are. And the Open Source world is now majoritarily using Git. 
I'm sure the vast majority of new hires at Mozilla in the past, say, 5 years, know Git and have had to learn Mercurial (although they arguably didn't need to). Even within Mozilla, with thousands(!) of repositories on GitHub, Firefox is now actually the exception rather than the norm. I should even actually say Desktop Firefox, because even Mobile Firefox lives on GitHub (although Fenix is moving back in together with Desktop Firefox, and the timing is such that that will probably happen before Firefox moves to Git). Heck, even Microsoft moved to Git! With a significant developer base already using Git thanks to git-cinnabar, and all the constraints and problems I mentioned previously, it actually seems natural that a transition (finally) happens. However, had git-cinnabar or something similarly viable not existed, I don't think Mozilla would be in a position to take this decision. On one hand, it probably wouldn't be in the current situation of having to support both Git and Mercurial in the tooling around Firefox, nor the resource constraints related to that. But on the other hand, it would be farther from supporting Git and being able to make the switch in order to address all the other problems. But... GitHub? I hope I made a compelling case that hosting is not as simple as it can seem, at the scale of the Firefox repository. It's also not Mozilla's main focus. Mozilla has enough on its plate with the migration of existing infrastructure that does rely on Mercurial to understandably not want to figure out the hosting part, especially with limited resources, and with the mixed experience hosting both Mercurial and git has been so far. After all, GitHub couldn't even display things like the contributors' graph on gecko-dev until recently, and hosting is literally their job! They still drop the ball on large blames (thankfully we have searchfox for those). Where does that leave us? Gitlab? For those criticizing GitHub for being proprietary, that's probably not open enough. Cloud Source Repositories? "But GitHub is Microsoft" is a complaint I've read a lot after the announcement. Do you think Google hosting would have appealed to these people? Bitbucket? I'm kind of surprised it wasn't in the list of providers that were considered, but I'm also kind of glad it wasn't (and I'll leave it at that). I think the only relatively big hosting provider that could have made the people criticizing the choice of GitHub happy is Codeberg, but I hadn't even heard of it before it was mentioned in response to Mozilla's announcement. But really, with literal thousands of Mozilla repositories already on GitHub, with literal tens of millions repositories on the platform overall, the pragmatic in me can't deny that it's an attractive option (and I can't stress enough that I wasn't remotely close to the room where the discussion about what choice to make happened). "But it's a slippery slope". I can see that being a real concern. LLVM also moved its repository to GitHub (from a (I think) self-hosted Subversion server), and ended up moving off Bugzilla and Phabricator to GitHub issues and PRs four years later. As an occasional contributor to LLVM, I hate this move. I hate the GitHub review UI with a passion. At least, right now, GitHub PRs are not a viable option for Mozilla, for their lack of support for security related PRs, and the more general shortcomings in the review UI. That doesn't mean things won't change in the future, but let's not get too far ahead of ourselves. 
The move to Git has just been announced, and the migration has not even begun yet. Just because Mozilla is moving the Firefox repository to GitHub doesn't mean it's locked in forever or that all the eggs are going to be thrown into one basket. If bridges need to be crossed in the future, we'll see then.

So, what's next?
The official announcement said we're not expecting the migration to really begin until six months from now. I'll swim against the current here, and say this: the earlier you can switch to Git, the earlier you'll find out what works and what doesn't work for you, whether you already know Git or not. While there is not one unique workflow, here's what I would recommend to anyone who wants to take the leap off Mercurial right now. As there is no one-size-fits-all workflow, I won't tell you how to organize yourself from there. I'll just say this: if you know the Mercurial sha1s of your previous local work, you can create branches for them with:
$ git branch <branch_name> $(git cinnabar hg2git <hg_sha1>)
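To make that concrete, here is a purely illustrative example of my own (the sha1s and branch names are made up; only the git branch / git cinnabar hg2git combination above comes from the post):
$ git branch my-feature $(git cinnabar hg2git 0123456789abcdef0123456789abcdef01234567)
$ git branch my-other-fix $(git cinnabar hg2git 89abcdef0123456789abcdef0123456789abcdef)
$ git log --oneline -1 my-feature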
At this point, you should have everything available on the Git side, and you can remove the .hg directory. Or move it into some empty directory somewhere else, just in case. But don't leave it there, it will only confuse the tooling. Artifact builds WILL be confused, though, and you'll have to ./mach configure before being able to do anything. You may also hit bug 1865299 if your working tree is older than this post. If you have any problem or question, you can ping me on #git-cinnabar or #git on Matrix. I'll put the instructions above somewhere on wiki.mozilla.org, and we can collaboratively iterate on them. Now, what the announcement didn't say is that the Git repository WILL NOT be gecko-dev, doesn't exist yet, and WON'T BE COMPATIBLE (trust me, it'll be for the better). Why did I make you do all the above, you ask? Because that won't be a problem. I'll have you covered, I promise. The upcoming release of git-cinnabar 0.7.0-b1 will have a way to smoothly switch between gecko-dev and the future repository (incidentally, that will also allow switching from a pure git-cinnabar clone to a gecko-dev one, for the git-cinnabar users who have kept reading this far).

What about git-cinnabar?
With Mercurial going the way of the dodo at Mozilla, my own need for git-cinnabar will vanish. Legitimately, this raises the question of whether it will still be maintained. I can't answer for sure. I don't have a crystal ball. However, the needs of the transition itself will motivate me to finish some long-standing things (like finalizing the support for pushing merges, which is currently behind an experimental flag) or implement some missing features (support for creating Mercurial branches). Git-cinnabar started as a Python script; it grew a sidekick implemented in C, which then incorporated some Rust, which then cannibalized the Python script and took its place. It is now close to 90% Rust and 10% C (if you don't count the code from Git that is statically linked to it), and has sort of become my Rust playground (it's also, I must admit, a mess, because of its history, but it's getting better). So day-to-day use with Mercurial is not my sole motivation to keep developing it. If it were, it would stay stagnant, because all the features I need are there, and the speed is not all that bad, although I know it could be better. Arguably, though, git-cinnabar has been relatively stagnant feature-wise, because all the features I need are there. So, no, I don't expect git-cinnabar to die along with Mercurial use at Mozilla, but I can't really promise anything either.

Final words
That was a long post. But there was a lot of ground to cover. And I still skipped over a bunch of things. I hope I didn't bore you to death. If I did and you're still reading... what's wrong with you? ;) So this is the end of Mercurial at Mozilla. So long, and thanks for all the fish. But this is also the beginning of a transition that is not easy, and that will not be without hiccups, I'm sure. So fasten your seatbelts (plural), and welcome the change. To circle back to the clickbait title, did I really kill Mercurial at Mozilla? Of course not. But it's like I stumbled upon a few sparks and tossed a can of gasoline on them. I didn't start the fire, but I sure made it into a proper bonfire... and now it has turned into a wildfire. And who knows? 15 years from now, someone else might be looking back at how Mozilla picked Git at the wrong time, and that, had we waited a little longer, we might have picked some new horse yet to come.
But hey, that's the tech cycle for you.

12 November 2023

Petter Reinholdtsen: New and improved sqlcipher in Debian for accessing Signal database

For a while now I have wanted direct access to the Signal database of messages and channels of my Desktop edition of Signal. I prefer the enforced end-to-end encryption of Signal these days for my communication with friends and family, to increase the level of safety and privacy as well as raising the cost of the mass surveillance that government and non-government entities practice these days. In August I came across a nice recipe explaining how to use sqlcipher to extract statistics from the Signal database. Unfortunately this did not work with the version of sqlcipher in Debian. The sqlcipher package is a "fork" of the sqlite package with added support for encrypted databases. Sadly the current Debian maintainer announced more than three years ago that he did not have time to maintain sqlcipher, so it seemed unlikely to be upgraded by the maintainer. I was reluctant to take on the job myself, as I have very limited experience maintaining shared libraries in Debian. After waiting and hoping for a few months, I gave up last week, and set out to update the package. In the process I orphaned it to make it more obvious to the next person looking at it that the package needs proper maintenance. The version in Debian was around five years old, and quite a lot of changes had taken place upstream relative to the Debian maintenance git repository. After spending a few days importing the new upstream versions, and realising that upstream did not care much for SONAME versioning, as I saw library symbols being both added and removed with minor version number changes, I concluded that I had to do a SONAME bump of the library package to avoid surprising the reverse dependencies. I even added a simple autopkgtest script to ensure the package works as intended. Deep in the hole of learning shared library maintenance, I set out a few days ago to upload the new version to Debian experimental to see what the quality assurance framework in Debian had to say about the result. The feedback told me the package was not too shabby, and yesterday I uploaded the latest version to Debian unstable. It should enter testing today or tomorrow, perhaps delayed by a small library transition. Armed with a new version of sqlcipher, I can now have a look at the SQL database in ~/.config/Signal/sql/db.sqlite. First, one needs to fetch the encryption key from the Signal configuration using this simple JSON extraction command:
/usr/bin/jq -r '."key"' ~/.config/Signal/config.json
Assume the result from that command is 'secretkey', a hexadecimal number representing the key used to encrypt the database. Next, one can connect to the database and inject the encryption key for access via SQL to fetch information from the database. Here is an example dumping the database structure:
% sqlcipher ~/.config/Signal/sql/db.sqlite
sqlite> PRAGMA key = "x'secretkey'";
sqlite> .schema
CREATE TABLE sqlite_stat1(tbl,idx,stat);
CREATE TABLE conversations(
      id STRING PRIMARY KEY ASC,
      json TEXT,
      active_at INTEGER,
      type STRING,
      members TEXT,
      name TEXT,
      profileName TEXT
    , profileFamilyName TEXT, profileFullName TEXT, e164 TEXT, serviceId TEXT, groupId TEXT, profileLastFetchedAt INTEGER);
CREATE TABLE identityKeys(
      id STRING PRIMARY KEY ASC,
      json TEXT
    );
CREATE TABLE items(
      id STRING PRIMARY KEY ASC,
      json TEXT
    );
CREATE TABLE sessions(
      id TEXT PRIMARY KEY,
      conversationId TEXT,
      json TEXT
    , ourServiceId STRING, serviceId STRING);
CREATE TABLE attachment_downloads(
    id STRING primary key,
    timestamp INTEGER,
    pending INTEGER,
    json TEXT
  );
CREATE TABLE sticker_packs(
    id TEXT PRIMARY KEY,
    key TEXT NOT NULL,
    author STRING,
    coverStickerId INTEGER,
    createdAt INTEGER,
    downloadAttempts INTEGER,
    installedAt INTEGER,
    lastUsed INTEGER,
    status STRING,
    stickerCount INTEGER,
    title STRING
  , attemptedStatus STRING, position INTEGER DEFAULT 0 NOT NULL, storageID STRING, storageVersion INTEGER, storageUnknownFields BLOB, storageNeedsSync
      INTEGER DEFAULT 0 NOT NULL);
CREATE TABLE stickers(
    id INTEGER NOT NULL,
    packId TEXT NOT NULL,
    emoji STRING,
    height INTEGER,
    isCoverOnly INTEGER,
    lastUsed INTEGER,
    path STRING,
    width INTEGER,
    PRIMARY KEY (id, packId),
    CONSTRAINT stickers_fk
      FOREIGN KEY (packId)
      REFERENCES sticker_packs(id)
      ON DELETE CASCADE
  );
CREATE TABLE sticker_references(
    messageId STRING,
    packId TEXT,
    CONSTRAINT sticker_references_fk
      FOREIGN KEY(packId)
      REFERENCES sticker_packs(id)
      ON DELETE CASCADE
  );
CREATE TABLE emojis(
    shortName TEXT PRIMARY KEY,
    lastUsage INTEGER
  );
CREATE TABLE messages(
        rowid INTEGER PRIMARY KEY ASC,
        id STRING UNIQUE,
        json TEXT,
        readStatus INTEGER,
        expires_at INTEGER,
        sent_at INTEGER,
        schemaVersion INTEGER,
        conversationId STRING,
        received_at INTEGER,
        source STRING,
        hasAttachments INTEGER,
        hasFileAttachments INTEGER,
        hasVisualMediaAttachments INTEGER,
        expireTimer INTEGER,
        expirationStartTimestamp INTEGER,
        type STRING,
        body TEXT,
        messageTimer INTEGER,
        messageTimerStart INTEGER,
        messageTimerExpiresAt INTEGER,
        isErased INTEGER,
        isViewOnce INTEGER,
        sourceServiceId TEXT, serverGuid STRING NULL, sourceDevice INTEGER, storyId STRING, isStory INTEGER
        GENERATED ALWAYS AS (type IS 'story'), isChangeCreatedByUs INTEGER NOT NULL DEFAULT 0, isTimerChangeFromSync INTEGER
        GENERATED ALWAYS AS (
          json_extract(json, '$.expirationTimerUpdate.fromSync') IS 1
        ), seenStatus NUMBER default 0, storyDistributionListId STRING, expiresAt INT
        GENERATED ALWAYS
        AS (ifnull(
          expirationStartTimestamp + (expireTimer * 1000),
          9007199254740991
        )), shouldAffectActivity INTEGER
        GENERATED ALWAYS AS (
          type IS NULL
          OR
          type NOT IN (
            'change-number-notification',
            'contact-removed-notification',
            'conversation-merge',
            'group-v1-migration',
            'keychange',
            'message-history-unsynced',
            'profile-change',
            'story',
            'universal-timer-notification',
            'verified-change'
          )
        ), shouldAffectPreview INTEGER
        GENERATED ALWAYS AS (
          type IS NULL
          OR
          type NOT IN (
            'change-number-notification',
            'contact-removed-notification',
            'conversation-merge',
            'group-v1-migration',
            'keychange',
            'message-history-unsynced',
            'profile-change',
            'story',
            'universal-timer-notification',
            'verified-change'
          )
        ), isUserInitiatedMessage INTEGER
        GENERATED ALWAYS AS (
          type IS NULL
          OR
          type NOT IN (
            'change-number-notification',
            'contact-removed-notification',
            'conversation-merge',
            'group-v1-migration',
            'group-v2-change',
            'keychange',
            'message-history-unsynced',
            'profile-change',
            'story',
            'universal-timer-notification',
            'verified-change'
          )
        ), mentionsMe INTEGER NOT NULL DEFAULT 0, isGroupLeaveEvent INTEGER
        GENERATED ALWAYS AS (
          type IS 'group-v2-change' AND
          json_array_length(json_extract(json, '$.groupV2Change.details')) IS 1 AND
          json_extract(json, '$.groupV2Change.details[0].type') IS 'member-remove' AND
          json_extract(json, '$.groupV2Change.from') IS NOT NULL AND
          json_extract(json, '$.groupV2Change.from') IS json_extract(json, '$.groupV2Change.details[0].aci')
        ), isGroupLeaveEventFromOther INTEGER
        GENERATED ALWAYS AS (
          isGroupLeaveEvent IS 1
          AND
          isChangeCreatedByUs IS 0
        ), callId TEXT
        GENERATED ALWAYS AS (
          json_extract(json, '$.callId')
        ));
CREATE TABLE sqlite_stat4(tbl,idx,neq,nlt,ndlt,sample);
CREATE TABLE jobs(
        id TEXT PRIMARY KEY,
        queueType TEXT STRING NOT NULL,
        timestamp INTEGER NOT NULL,
        data STRING TEXT
      );
CREATE TABLE reactions(
        conversationId STRING,
        emoji STRING,
        fromId STRING,
        messageReceivedAt INTEGER,
        targetAuthorAci STRING,
        targetTimestamp INTEGER,
        unread INTEGER
      , messageId STRING);
CREATE TABLE senderKeys(
        id TEXT PRIMARY KEY NOT NULL,
        senderId TEXT NOT NULL,
        distributionId TEXT NOT NULL,
        data BLOB NOT NULL,
        lastUpdatedDate NUMBER NOT NULL
      );
CREATE TABLE unprocessed(
        id STRING PRIMARY KEY ASC,
        timestamp INTEGER,
        version INTEGER,
        attempts INTEGER,
        envelope TEXT,
        decrypted TEXT,
        source TEXT,
        serverTimestamp INTEGER,
        sourceServiceId STRING
      , serverGuid STRING NULL, sourceDevice INTEGER, receivedAtCounter INTEGER, urgent INTEGER, story INTEGER);
CREATE TABLE sendLogPayloads(
        id INTEGER PRIMARY KEY ASC,
        timestamp INTEGER NOT NULL,
        contentHint INTEGER NOT NULL,
        proto BLOB NOT NULL
      , urgent INTEGER, hasPniSignatureMessage INTEGER DEFAULT 0 NOT NULL);
CREATE TABLE sendLogRecipients(
        payloadId INTEGER NOT NULL,
        recipientServiceId STRING NOT NULL,
        deviceId INTEGER NOT NULL,
        PRIMARY KEY (payloadId, recipientServiceId, deviceId),
        CONSTRAINT sendLogRecipientsForeignKey
          FOREIGN KEY (payloadId)
          REFERENCES sendLogPayloads(id)
          ON DELETE CASCADE
      );
CREATE TABLE sendLogMessageIds(
        payloadId INTEGER NOT NULL,
        messageId STRING NOT NULL,
        PRIMARY KEY (payloadId, messageId),
        CONSTRAINT sendLogMessageIdsForeignKey
          FOREIGN KEY (payloadId)
          REFERENCES sendLogPayloads(id)
          ON DELETE CASCADE
      );
CREATE TABLE preKeys(
        id STRING PRIMARY KEY ASC,
        json TEXT
      , ourServiceId NUMBER
        GENERATED ALWAYS AS (json_extract(json, '$.ourServiceId')));
CREATE TABLE signedPreKeys(
        id STRING PRIMARY KEY ASC,
        json TEXT
      , ourServiceId NUMBER
        GENERATED ALWAYS AS (json_extract(json, '$.ourServiceId')));
CREATE TABLE badges(
        id TEXT PRIMARY KEY,
        category TEXT NOT NULL,
        name TEXT NOT NULL,
        descriptionTemplate TEXT NOT NULL
      );
CREATE TABLE badgeImageFiles(
        badgeId TEXT REFERENCES badges(id)
          ON DELETE CASCADE
          ON UPDATE CASCADE,
        'order' INTEGER NOT NULL,
        url TEXT NOT NULL,
        localPath TEXT,
        theme TEXT NOT NULL
      );
CREATE TABLE storyReads (
        authorId STRING NOT NULL,
        conversationId STRING NOT NULL,
        storyId STRING NOT NULL,
        storyReadDate NUMBER NOT NULL,
        PRIMARY KEY (authorId, storyId)
      );
CREATE TABLE storyDistributions(
        id STRING PRIMARY KEY NOT NULL,
        name TEXT,
        senderKeyInfoJson STRING
      , deletedAtTimestamp INTEGER, allowsReplies INTEGER, isBlockList INTEGER, storageID STRING, storageVersion INTEGER, storageUnknownFields BLOB, storageNeedsSync INTEGER);
CREATE TABLE storyDistributionMembers(
        listId STRING NOT NULL REFERENCES storyDistributions(id)
          ON DELETE CASCADE
          ON UPDATE CASCADE,
        serviceId STRING NOT NULL,
        PRIMARY KEY (listId, serviceId)
      );
CREATE TABLE uninstalled_sticker_packs (
        id STRING NOT NULL PRIMARY KEY,
        uninstalledAt NUMBER NOT NULL,
        storageID STRING,
        storageVersion NUMBER,
        storageUnknownFields BLOB,
        storageNeedsSync INTEGER NOT NULL
      );
CREATE TABLE groupCallRingCancellations(
        ringId INTEGER PRIMARY KEY,
        createdAt INTEGER NOT NULL
      );
CREATE TABLE IF NOT EXISTS 'messages_fts_data'(id INTEGER PRIMARY KEY, block BLOB);
CREATE TABLE IF NOT EXISTS 'messages_fts_idx'(segid, term, pgno, PRIMARY KEY(segid, term)) WITHOUT ROWID;
CREATE TABLE IF NOT EXISTS 'messages_fts_content'(id INTEGER PRIMARY KEY, c0);
CREATE TABLE IF NOT EXISTS 'messages_fts_docsize'(id INTEGER PRIMARY KEY, sz BLOB);
CREATE TABLE IF NOT EXISTS 'messages_fts_config'(k PRIMARY KEY, v) WITHOUT ROWID;
CREATE TABLE edited_messages(
        messageId STRING REFERENCES messages(id)
          ON DELETE CASCADE,
        sentAt INTEGER,
        readStatus INTEGER
      , conversationId STRING);
CREATE TABLE mentions (
        messageId REFERENCES messages(id) ON DELETE CASCADE,
        mentionAci STRING,
        start INTEGER,
        length INTEGER
      );
CREATE TABLE kyberPreKeys(
        id STRING PRIMARY KEY NOT NULL,
        json TEXT NOT NULL, ourServiceId NUMBER
        GENERATED ALWAYS AS (json_extract(json, '$.ourServiceId')));
CREATE TABLE callsHistory (
        callId TEXT PRIMARY KEY,
        peerId TEXT NOT NULL, -- conversation id (legacy)   uuid   groupId   roomId
        ringerId TEXT DEFAULT NULL, -- ringer uuid
        mode TEXT NOT NULL, -- enum "Direct"   "Group"
        type TEXT NOT NULL, -- enum "Audio"   "Video"   "Group"
        direction TEXT NOT NULL, -- enum "Incoming"   "Outgoing
        -- Direct: enum "Pending"   "Missed"   "Accepted"   "Deleted"
        -- Group: enum "GenericGroupCall"   "OutgoingRing"   "Ringing"   "Joined"   "Missed"   "Declined"   "Accepted"   "Deleted"
        status TEXT NOT NULL,
        timestamp INTEGER NOT NULL,
        UNIQUE (callId, peerId) ON CONFLICT FAIL
      );
[ dropped all indexes to save space in this blog post ]
CREATE TRIGGER messages_on_view_once_update AFTER UPDATE ON messages
      WHEN
        new.body IS NOT NULL AND new.isViewOnce = 1
      BEGIN
        DELETE FROM messages_fts WHERE rowid = old.rowid;
      END;
CREATE TRIGGER messages_on_insert AFTER INSERT ON messages
      WHEN new.isViewOnce IS NOT 1 AND new.storyId IS NULL
      BEGIN
        INSERT INTO messages_fts
          (rowid, body)
        VALUES
          (new.rowid, new.body);
      END;
CREATE TRIGGER messages_on_delete AFTER DELETE ON messages BEGIN
        DELETE FROM messages_fts WHERE rowid = old.rowid;
        DELETE FROM sendLogPayloads WHERE id IN (
          SELECT payloadId FROM sendLogMessageIds
          WHERE messageId = old.id
        );
        DELETE FROM reactions WHERE rowid IN (
          SELECT rowid FROM reactions
          WHERE messageId = old.id
        );
        DELETE FROM storyReads WHERE storyId = old.storyId;
      END;
CREATE VIRTUAL TABLE messages_fts USING fts5(
        body,
        tokenize = 'signal_tokenizer'
      );
CREATE TRIGGER messages_on_update AFTER UPDATE ON messages
      WHEN
        (new.body IS NULL OR old.body IS NOT new.body) AND
         new.isViewOnce IS NOT 1 AND new.storyId IS NULL
      BEGIN
        DELETE FROM messages_fts WHERE rowid = old.rowid;
        INSERT INTO messages_fts
          (rowid, body)
        VALUES
          (new.rowid, new.body);
      END;
CREATE TRIGGER messages_on_insert_insert_mentions AFTER INSERT ON messages
      BEGIN
        INSERT INTO mentions (messageId, mentionAci, start, length)
        
    SELECT messages.id, bodyRanges.value ->> 'mentionAci' as mentionAci,
      bodyRanges.value ->> 'start' as start,
      bodyRanges.value ->> 'length' as length
    FROM messages, json_each(messages.json ->> 'bodyRanges') as bodyRanges
    WHERE bodyRanges.value ->> 'mentionAci' IS NOT NULL
  
        AND messages.id = new.id;
      END;
CREATE TRIGGER messages_on_update_update_mentions AFTER UPDATE ON messages
      BEGIN
        DELETE FROM mentions WHERE messageId = new.id;
        INSERT INTO mentions (messageId, mentionAci, start, length)
        
    SELECT messages.id, bodyRanges.value ->> 'mentionAci' as mentionAci,
      bodyRanges.value ->> 'start' as start,
      bodyRanges.value ->> 'length' as length
    FROM messages, json_each(messages.json ->> 'bodyRanges') as bodyRanges
    WHERE bodyRanges.value ->> 'mentionAci' IS NOT NULL
  
        AND messages.id = new.id;
      END;
sqlite>
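As a small follow-up sketch of my own (not part of the original recipe), ad-hoc queries can also be run non-interactively against the same database; this assumes the messages table shown above, that sent_at is a millisecond timestamp, and a sqlcipher shell new enough to support .mode json:
% sqlcipher ~/.config/Signal/sql/db.sqlite <<'EOF'
PRAGMA key = "x'secretkey'";
.mode json
SELECT conversationId,
       datetime(sent_at / 1000, 'unixepoch') AS sent,
       body
  FROM messages
 WHERE body IS NOT NULL
 ORDER BY sent_at DESC
 LIMIT 10;
EOF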
Finally I have the tool I need to inspect and process Signal messages, without using the vendor-provided client. Now on to transforming it into a more useful format. As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

4 November 2023

Dirk Eddelbuettel: RcppEigen 0.3.3.9.4 on CRAN: Maintenance, Matrix Changes

A new release 0.3.3.9.4 of RcppEigen arrived on CRAN yesterday, and went to Debian today. Eigen is a C++ template library for linear algebra: matrices, vectors, numerical solvers, and related algorithms. This update contains a small amount of the usual maintenance (see below), along with a very nice pull request by Mikael Jagan which simplifies the interface with the Matrix package and in particular the CHOLMOD library that is part of SuiteSparse. This release is coordinated with lme4 and OpenMx, which are also being updated. The complete NEWS file entry follows.

Changes in RcppEigen version 0.3.3.9.4 (2023-11-01)
  • The CITATION file has been updated for the new bibentry style.
  • The package skeleton generator has been updated and no longer sets an Imports:.
  • Some README.md URLs and badges have been updated.
  • The use of -fopenmp has been documented in Makevars, and a simple thread-count reporting function has been added.
  • The old manual src/init.c has been replaced by an autogenerated version, and the RcppExports file has been regenerated.
  • The interface to package Matrix has been updated and simplified thanks to an excellent patch by Mikael Jagan.
  • The new upload is coordinated with packages lme4 and OpenMx.

Courtesy of CRANberries, there is also a diffstat report for the most recent release. If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

2 November 2023

François Marier: Upgrading from Debian 11 bullseye to 12 bookworm

Over the last few months, I upgraded my Debian machines from bullseye to bookworm. The process was uneventful, but I ended up reconfiguring several things afterwards in order to modernize my upgraded machines.

Logcheck
I noticed in this release that the transition to journald is essentially complete. This means that rsyslog is no longer needed on most of my systems:
apt purge rsyslog
Once that was done, I was able to comment out the following lines in /etc/logcheck/logcheck.logfiles.d/syslog.logfiles:
#/var/log/syslog
#/var/log/auth.log
I did have to adjust some of my custom logcheck rules, particularly the ones that deal with kernel messages:
--- a/logcheck/ignore.d.server/local-kernel
+++ b/logcheck/ignore.d.server/local-kernel
@@ -1,1 +1,1 @@
-^\w{3} [ :[:digit:]]{11} [._[:alnum:]-]+ kernel: \[[0-9.]+\] IN=eno1 OUT= MAC=[0-9a-f:]+ SRC=[0-9a-f.:]+
+^\w{3} [ :[:digit:]]{11} [._[:alnum:]-]+ kernel: (\[[0-9.]+\] )?IN=eno1 OUT= MAC=[0-9a-f:]+ SRC=[0-9a-f.:]+
Then I moved local entries from /etc/logcheck/logcheck.logfiles to /etc/logcheck/logcheck.logfiles.d/local.logfiles (/var/log/syslog and /var/log/auth.log are enabled by default when needed) and removed some files that are no longer used:
rm /var/log/mail.err*
rm /var/log/mail.warn*
rm /var/log/mail.info*
Finally, I had to fix any unescaped | characters in my local rules. For example, error == NULL || \*error == NULL must now be written as error == NULL \|\| \*error == NULL.

Networking
After the upgrade, I got a notice that the isc-dhcp-client is now deprecated and so I removed it from my system:
apt purge isc-dhcp-client
This however meant that I needed to ensure that my network configuration software does not depend on the now-deprecated DHCP client. On my laptop, I was already using NetworkManager for my main network interfaces and that has built-in DHCP support.

Migration to systemd-networkd
On my backup server, I took this opportunity to switch from ifupdown to systemd-networkd by removing ifupdown:
apt purge ifupdown
rm /etc/network/interfaces
putting the following in /etc/systemd/network/20-wired.network:
[Match]
Name=eno1
[Network]
DHCP=yes
MulticastDNS=yes
and then enabling/starting systemd-networkd:
systemctl enable systemd-networkd
systemctl start systemd-networkd
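To verify that the interface actually came up and received a DHCP lease (my addition, not part of the original post), networkctl can be queried:
networkctl list
networkctl status eno1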
I also needed to install polkit:
apt install --no-install-recommends policykit-1
in order to allow systemd-networkd to set the hostname. In order to start my firewall automatically as interfaces are brought up, I wrote a dispatcher script to apply my existing iptables rules.
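The dispatcher script itself is not included in the post; a minimal, hypothetical sketch, assuming networkd-dispatcher is installed, the script is dropped into /etc/networkd-dispatcher/routable.d/, and the rules live in the /etc/network/*.up.rules files referenced later in this post, could look like this:
#!/bin/sh
# Restore IPv4 and IPv6 firewall rules once an interface becomes routable.
set -e
iptables-restore < /etc/network/iptables.up.rules
ip6tables-restore < /etc/network/ip6tables.up.rules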

Migration to predictable network interface names
On my Linode server, I did the same as on the backup server, but I put the following in /etc/systemd/network/20-wired.network since it has a static IPv6 allocation:
[Match]
Name=enp0s4
[Network]
DHCP=yes
Address=2600:3c01::xxxx:xxxx:xxxx:939f/64
Gateway=fe80::1
and switched to predictable network interface names by deleting these two files:
  • /etc/systemd/network/50-virtio-kernel-names.link
  • /etc/systemd/network/99-default.link
and then changing eth0 to enp0s4 in:
  • /etc/network/iptables.up.rules
  • /etc/network/ip6tables.up.rules
  • /etc/rc.local (for OpenVPN)
  • /etc/logcheck/ignored.d.*/*
Then I regenerated all initramfs:
update-initramfs -u -k all
and rebooted the virtual machine.

Giving systemd-resolved control of /etc/resolv.conf
After reading this history of DNS resolution on Linux, I decided to modernize my resolv.conf setup and let systemd-resolved handle /etc/resolv.conf. I installed the package:
apt install systemd-resolved
and then removed no-longer-needed packages:
apt purge resolvconf avahi-daemon
I also disabled support for Link-Local Multicast Name Resolution (LLMNR) after reading this person's reasoning by putting the following in /etc/systemd/resolved.conf.d/llmnr.conf:
[Resolve]
LLMNR=no
I verified that mDNS is enabled and LLMNR is disabled:
$ resolvectl mdns
Global: yes
Link 2 (enp0s25): yes
Link 3 (wlp3s0): yes
$ resolvectl llmnr
Global: no
Link 2 (enp0s25): no
Link 3 (wlp3s0): no
Note that if you want auto-discovery of local printers using CUPS, you need to keep avahi-daemon since cups-browsed doesn't support systemd-resolved. You can verify that it works using:
sudo lpinfo --include-schemes dnssd -v

Dynamic DNS
I replaced ddclient with inadyn since it doesn't work with no-ip.com anymore, using the configuration I described in an old blog post.

chkrootkit
I moved my customizations from /etc/chkrootkit.conf to /etc/chkrootkit/chkrootkit.conf after seeing this message in my logs:
WARNING: /etc/chkrootkit.conf is deprecated. Please put your settings in /etc/chkrootkit/chkrootkit.conf instead: /etc/chkrootkit.conf will be ignored in a future release and should be deleted.

ssh
As mentioned in Debian bug #1018106, to silence the following warnings:
sshd[6283]: pam_env(sshd:session): deprecated reading of user environment enabled
I changed the following in /etc/pam.d/sshd:
--- a/pam.d/sshd
+++ b/pam.d/sshd
@@ -44,7 +44,7 @@ session    required     pam_limits.so
 session    required     pam_env.so # [1]
 # In Debian 4.0 (etch), locale-related environment variables were moved to
 # /etc/default/locale, so read that as well.
-session    required     pam_env.so user_readenv=1 envfile=/etc/default/locale
+session    required     pam_env.so envfile=/etc/default/locale
 # SELinux needs to intervene at login time to ensure that the process starts
 # in the proper default security context.  Only sessions which are intended
I also made the following changes to /etc/ssh/sshd_config.d/local.conf based on the advice of ssh-audit 2.9.0:
-KexAlgorithms curve25519-sha256@libssh.org,curve25519-sha256,diffie-hellman-group14-sha256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512,diffie-hellman-group-exchange-sha256
+KexAlgorithms curve25519-sha256@libssh.org,curve25519-sha256,sntrup761x25519-sha512@openssh.com,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512
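For reference (my addition, not part of the original post), re-running ssh-audit after such a change is as simple as pointing it at the host:
ssh-audit localhost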

1 November 2023

Dirk Eddelbuettel: RcppArmadillo 0.12.6.6.0 on CRAN: Bugfix, Thread Throttling

armadillo image Armadillo is a powerful and expressive C++ template library for linear algebra and scientific computing. It aims towards a good balance between speed and ease of use, has a syntax deliberately close to Matlab, and is useful for algorithm development directly in C++, or quick conversion of research code into production environments. RcppArmadillo integrates this library with the R environment and language and is widely used by (currently) 1110 other packages on CRAN, downloaded 31.2 million times (per the partial logs from the cloud mirrors of CRAN), and the CSDA paper (preprint / vignette) by Conrad and myself has been cited 563 times according to Google Scholar. This release brings upstream bugfix releases 12.6.5 (sparse matrix corner case) and 12.6.6 with an ARPACK correction. Conrad released it this morning; I had been running reverse dependency checks anyway and knew we were in good shape, so for once I did not await a full run against the now over 1100 (!!) packages using RcppArmadillo. This release also contains a change I prepared on Sunday which helps with the much-criticized (and rightly so, I may add) insistence by CRAN concerning 'throttling'. The motivation is understandable: CRAN tests many packages at once on beefy servers and can ill afford tests going off and requesting numerous cores. But rather than providing a global setting at their end, CRAN insists that each package (!!) deals with this. The recent traffic on the helpful-as-ever r-pkg-devel mailing list clearly shows that this confuses quite a few package developers. Some have admitted to simply turning examples and tests off: a net loss for all of us. Now, Armadillo defaults to using up to eight cores (which is enough to upset CRAN) when running with OpenMP (which is generally only on Linux, for reasons I'd rather not get into). With this release I expose helper functions (from OpenMP) to limit this. I also set up an example package and repo RcppArmadilloOpenMPEx detailing this, and added a demonstration of how to use the new throttlers to the fastLm example. I hope this proves useful to users of the package. The set of changes since the last CRAN release follows.

Changes in RcppArmadillo version 0.12.6.6.0 (2023-10-31)
  • Upgraded to Armadillo release 12.6.6 (Cortisol Retox)
    • Fix eigs_sym(), eigs_gen() and svds() to generate deterministic results in ARPACK mode
  • Add helper functions to set and get the number of OpenMP threads
  • Store initial thread count at package load and use in thread-throttling helper (and resetter) suitable for CRAN constraints

Changes in RcppArmadillo version 0.12.6.5.0 (2023-10-14)
  • Upgraded to Armadillo release 12.6.5 (Cortisol Retox)
    • Fix for corner-case bug in handling sparse matrices with no non-zero elements

Courtesy of my CRANberries, there is a diffstat report relative to previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the Rcpp R-Forge page. If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

23 October 2023

Jonathan Dowland: cherished

minidisc player
iPod
Bose headphones
If I think back to technology I've used and really cherished, quite often they're audio-related: Minidisc players, Walkmans, MP3 players, headphones. These pieces of technology served as vessels to access music, which of course I often have a fond emotional connection to. And so I think the tech has benefited from that, and in some way the fondness or emotional connection to music has somewhat transferred or rubbed off on the technology used to access it. Put another way, no matter how well engineered it was, how easy it was to use or how well it did the job, I doubt I'd have fond memories, years later, of a toilet brush. I wonder if the same "bleeding" of fondness applies to brands, too. If so, and if you were a large tech company, it would be worth having some audio gear in your portfolio. I think Sony must have benefited from this. Apple too.

on-ear phones
For listening on-the-go, I really like on-ear headphones, as opposed to over-ear. I have some lovely over-ear phones for listening-at-rest, but they get my head too hot when I'm active. The on-ears are a nice compromise between the comfort and quality of over-ear, and the portability of in-ear. Most of the ones I've owned have folded up nicely into a coat pocket too. My current Bose pair are from 2019 and might be towards the end of their life. They replaced some AKG K451s, which were also discontinued. Last time I looked (2019) the Sony offerings in this product category were not great. That might have changed. But I fear that the manufacturers have collectively decided this product category isn't worth investing in.

22 October 2023

Aigars Mahinovs: Figuring out finances part 3

So now that I have something that looks very much like a budgeting setup going, I am going to .. delete it! Why? Well, at the end of the last part of this, the Firefly III instance was running on a tiny Debian server in a Docker container right next to another Docker container that is running the main user of this server - a Home Assistant instance that has been managing my home for several years already. So why change that? See, there is one bit of knowledge that is very crucial to your Home Assistant experience, which is not really emphasised enough in the Home Assistant documentation. In fact, back when I was getting into Home Assistant, both the main documentation and basically all the guides around were just coming off the hype of Docker disrupting everything, and that is a big reason why everyone suggested installing and using Home Assistant as a Docker container on top of any kind of stable OS. In fact I used to run it for years on my TerraMaster NAS, just so that I didn't have a separate home server running 24/7 at home and could have everything inside the very compact NAS case.

So here is the thing you NEED to know - Home Assistant Container is a DEMO version of Home Assistant! If you want to have the full Home Assistant experience and use the knowledge of the huge community around the HA space, you have to use Home Assistant OS. Ideally on dedicated hardware. Ideally on the HA Green box, but any tiny PC would also work great. A Raspberry Pi 4+ is common, but quite weak as the network size grows, and especially the SD card used for storage gets old very fast. Get a real small x86 PC with at least 4Gb RAM and an NVMe SSD (eMMC is fine too). You want to have an Ethernet port and a few free USB ports. I would also suggest immediately getting the HA SkyConnect adapter that can do Zigbee networking and will do Matter soon (tm). I am making do with a SonOff Zigbee gateway, but it is quite hacky to get working and your whole Zigbee communication breaks down if the WiFi goes down - suboptimal.

So I took a backup of the Home Assistant instance using its built-in tools. I took an export of my fully configured Firefly III instance and proceeded to wipe the drive of the NUC. That was not a smart idea. :D On the Home Assistant side I was really frustrated by the documentation, which was really focused on users that are (likely) using Windows and are using an SD card in something like a Raspberry Pi to get Home Assistant OS running. It recommended downloading Etcher to write the image to the boot medium. That is a really weird piece of software that managed to consistently crash when I was trying to run it from Debian Live or Ubuntu Live on my NUC. It took me way too long to give up and try something much simpler - dd:

xzcat haos_generic-x86-64-11.0.img.xz | dd of=/dev/mmcblk0 bs=1M

That just worked, perfectly and really fast. If you want to use a GUI in a live environment, then just using gnome-disk-utility ("Disks" in the Gnome menu) and its "Restore Disk Image ..." action on a partition would work just as well. It even supports decompressing the XZ images directly while writing. But that image is small, will it not leave a ton of unused disk space behind the fixed install partition? Yes, it will ... until first boot. The HA OS takes over the empty space after its install partition on the first boot-up and just grows its main partition to take up all the remaining space. Smart.
After the first boot is completed, the first boot wizard can be accessed via your web browser, and one of the prominent buttons there is restoring from backup. So you just give it the backup file and wait. Sadly the restore does not actually give any kind of progress indication, so your only way to figure out when it is done is opening the same web address in another browser tab and refreshing periodically - after restoring from backup it just boots into the same config as it had before - all the settings, all the devices, all the history is preserved. Even authentication tokens are preserved, so if you had the Home Assistant mobile app installed on your phone (both for remote access and to send location info and phone state, like charging, to HA to trigger automations) then it will just suddenly start working again without further actions needed from your side. That is an almost perfect backup/restore experience.

The first thing you get for using the OS version of HA is an easy automatic update that also automatically takes a backup before upgrading, so if anything breaks you can roll back with one click. There is also a command-line tool that allows you to upgrade, but also downgrade, ha-core and other modules. I had to use it today as HA version 23.10.4 actually broke support for the Sonoff bridge that I am using to control Zigbee devices, which are like 90% of all smart devices in my home. Really helpful stuff, but not a must-have. What is a must-have, and what you can (really) only get with Home Assistant Operating System, are Addons. Some addons are just normal servers you can run alongside HA on the same HA OS server, like MariaDB or Plex or a file server. That is not the most important bit, but even there the software comes pre-configured for use in a home server configuration and has a very simple config UI to pre-configure key settings, like users, passwords and database accesses for MariaDB - you can literally, in a few clicks and a few strings, create several users, each with its own access to its own database. A couple more clicks and the DB is running and will be kept restarted in case of failures.

But the real gems in the Home Assistant Addon Store are modules that extend Home Assistant core functionality in ways that would be really hard or near impossible to configure in Home Assistant Container manually, especially because no documentation has ever existed for such manual config - everyone just tells you to install the addon from the HA Addon store or from HACS. Or you can read the addon metadata in various repos and figure out what containers it actually runs with what settings and configs and what hooks it puts into the HA Core to make them cooperate. And then do it all over again when a new version breaks everything 6 months later, when you have already forgotten everything. Among the addons that show up immediately after installation are addons to install the new Matter server, a MariaDB and MQTT server (that other addons can use for data storage and message exchange), Z-Wave support and ESPHome integration, a very handy File manager that includes editors to edit Home Assistant configs directly in the browser, and an SSH/Terminal addon that both allows SSH connections and provides a web-based terminal that gives access to the OS itself and also to a command-line interface, for example, to do package downgrades if needed or to see detailed logs. And that is also where you can get the features that are the focus this year for HA developers - voice enablers. However, that is only the beginning.
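For reference, the downgrade mentioned above is done with the ha command-line tool available from that SSH/Terminal addon; the exact invocation below is from memory and the version number is only an example, so treat it as an assumption rather than a definitive recipe:
ha core update --version 2023.10.3
ha core info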
Like in Debian, you can add additional repositories to expand your list of available addons. Unlike Debian, most of the amazing software that is available for Home Assistant is outside the main, official addon store. For now I have added the most popular addon repository - HACS (Home Assistant Community Store) - and the repository maintained by Alexbelgium. The first includes things like NodeRED (a workflow-based automation programming UI), Tailscale/Wirescale for VPN servers, motionEye for CCTV control, Plex for home streaming. HACS also includes a lot of HA UI enhancement modules, like themes, custom UI control panels like Mushroom or mini-graph-card, and integrations that provide more advanced functions but also require more knowledge to use, like Local Tuya - which is harder to set up, but allows fully local control of (normally) cloud-based devices. And it has AppDaemon - basically a Python-based automation framework where you put in Python scripts that get run in a special environment where they get fed events from Home Assistant and can trigger back events that can control everything HA can, and also do anything Python can do. This I will need to explore later. And the repository by Alex includes the thing that is actually the focus of this blog post (I know :D) - the Firefly III addon and Firefly Importer addon that you can then add to your Home Assistant OS with a few clicks. It also has all kinds of addons for NAS management, a photo/video server, book servers and Portainer, which lets us set up and run any Docker container inside the HA OS structure. HA OS will detect this and warn you about unsupported processes running on your HA OS instance (nice security feature!), but you can just dismiss that. This will be very helpful very soon.

This whole environment of OS and containers and apps really made me think - what was missing in Debian that made the talented developers behind all of that spend the immense time and effort to set up a completely new OS and app infrastructure and develop a completely parallel developer community for Home Assistant apps, interfaces and configurations? Is there anything that can still be done to bring the HA community and the general open source and Debian communities closer together? HA devs are not doing anything wrong: they are using the best open source can provide, they bring it to people who could not install and use it otherwise, and they are contributing fixes and improvements as well. But there must be some way to do this better, together.

So I installed MariaDB and created a user and database for Firefly. I installed Firefly III and configured it to use the MariaDB with the web config UI. When I went into the Firefly III web UI I was confronted with the normal wizard to set up a new instance. And no reference to any backup restore. Hmm, ok. Maybe that goes via the Importer? So I made an access token again, configured the Importer to use that, and configured the Nordlinger bank connection settings. Then I tried to import the export that I had downloaded from Firefly III before. The importer did not auto-recognise the format. Turns out it is just a list of transactions ... It can only be barely useful if you first manually create all the asset accounts with the same names as before, and even then you'll again have to deal with resolving the problem of transfers showing up twice. And all of your categories (that have not been used yet) are gone, your automation rules and bills are gone, your budgets and piggy banks are gone. Boooo.
It will be easier for me to recreate my account data from bank exports again than to resolve the data in that transaction export. Turns out that the Firefly III documentation explicitly recommends making a mysqldump of your own and not relying on anything in the app itself for backup purposes. Kind of sad this was not mentioned in the export page that sure looked a lot like a backup :D After doing all that work all over again, I needed to make something new, so as not to feel like I wasted days of work for no real gain. So I started solving a problem I have had for a while already - how do I add cash transactions to the system when I am out of the house with just my phone in hand? So far my workaround has been just sending myself messages in WhatsApp with the amount and description of any cash expenses. Two solutions are possible: an app and a bot.

There are actually multiple Android-based phone apps that work with the Firefly III API to do full financial management from the phone. However, after trying it out, that is not what I will be using most of the time. First of all this requires your Firefly III instance to be accessible from the Internet. Either via direct API access using some port forwarding and secured with HTTPS and good access tokens, or via a VPN server redirect that is installed on both HA and your phone. Tailscale was really easy to get working. But the power has its drawbacks - adding a new cash transaction requires opening the app, choosing the new transaction view, entering the description and amount, choosing "Cash" as the source account and optionally choosing a destination expense account, choosing a category and budget and then submitting the form to the server. Sadly none of that really works if you have no Internet or bad Internet at the place where you are using cash. And it's just too many steps. Annoying.

An easier alternative is setting up a Telegram bot - it runs in a custom Docker container right next to your Firefly (via Portainer) and you talk to it via a custom Telegram chat channel that you create very easily and quickly. And then you can just tell it "Coffee 5" and it will create a transaction of amount 5 from the (default) cash account with the description "Coffee". This part also works if you are offline at the moment - the bot will receive the message once you get back online. You can use the Telegram bot menu system to edit the transaction to add categories or expense accounts, but this part only works if you are online. And the Firefly instance does not have to be online at all. Really nifty. So next week I will need to write up all the regular payments as bills in Firefly (again) and then I can start writing a Python script to predict my (financial) future!

15 October 2023

Aigars Mahinovs: Figuring out finances part 2

A week ago I started to migrate my financial planning from a closed source system to a new system based on an open source, self-hosted solution. The main candidate is Firefly III - a relatively simple financial planner with a rather rich feature set and a solid user base and developer support. Starting it up with a Docker Compose file was quite easy, following the official documentation. The same Compose file also managed the MySQL database, the importer app and a cron container for regular imports. The separate importer app allows both imports from CSV files and also from external bank account connection services. For whatever weird reason both of those services support exporting data from my regular bank accounts, but not from my credit cards at exactly the same bank. So for those cards I would need to periodically download CSV transaction exports and feed them into the importer. The combination of the cron container and the importer app allows for both of these functions. To do this you first do the import via the Web UI and configure all the mappings - configure file and date formats, map account names in the bank output to account names that you added in Firefly, and configure what other columns mean and what the mapping of other values is, like expense and income accounts. At the end of the process you can download a json file representing all the settings of the import you just did. Putting such a json config file in the input folder of the cron container and telling it to do an import (weekly, for example) would do such an import periodically. Putting a credit card CSV file along with a config file for credit card import would also auto-import that. So far so good.

However, when I tried importing exports from MoneyWiz, or even when I tried re-creating account history directly from the bank data, I hit a very annoying problem. Transfers. The thing is, detecting transfers between the real (asset) accounts that you are managing is really essential. For one, those are not real expenses and incomes, so failing to mark them as transfers would show weird income/expense numbers. But if you do detect them as transfers and correctly map the destination account, but fail to match the transactions with one another, you get a double transaction. This becomes really hard when transactions happen on different dates and have different descriptions. So you get both a +1000 transaction "Credit card reset" on 14.09.2023 in the credit card account and a -1000$ transaction "Repayment of own credit card" on 24.09.2023 in the main account of the bank. Matching them to recognise that it is just one transfer and not two is ... non-trivial. The best solution I could come up with so far is to always map the "Opposing account name" for such transfers to a virtual "Transfers" asset account. That has the benefit of actually being able to correctly represent transfers that take several days to move between accounts, and showing you how much money is still in transit at any point in time. So after I figured this out, the account balances finally started matching up with reality.

Setting up spending categories required another change - the author of Firefly III does not like complexity, so it does not support nesting categories. Categories can only be in a single, flat list. It is suggested that if you do need to track multiple categories and also their combination, then category groups might help you. I will survive for now by simplifying the categories that I do actually use. Might actually make the reports more usable.
A new feature for me will be the ability to use automation rules to assign transactions to categories based on their contents, like expense accounts or keywords in descriptions. Setting up regular bills (with matching rules to assign incoming transactions to those bills) is another feature that is very important to me. The feature itself works just fine in Firefly III, but it has two restrictions that the author of the software does not actually want to change. For one, it can not be used to track predictable incomes (like salary), presumably because it is only there for bills (subscriptions in the v3 UI). And for another, there is nothing in the base software that actually uses the data from bills to look forward. The author does not like to try to predict the future. Which for me is basically one of the two reasons to use this kind of software at all. I want to know - given all manually entered regular and 100% predictable expenses and incomes, what will be the state of my accounts at a particular date? It seems that to get at that I will need to write my own oracle script using the rich Firefly III API.

The last question I had for this week was - how will I enter a new cash purchase of potatoes at a farmers market into a system that sits on a private server in my home network? Turns out there are actually multiple Android apps for Firefly III that can easily be used for this, using robust OAuth or shared token authentication. Except the web UI needs to be exposed online for this to work (and ideally protected by HTTPS). Other users have also created Telegram bots that allow using chat messages to create transaction entries. This is a bit harder to use and has a more narrow scope, but should be easier to set up. Will have to try both approaches. But before I really get going on this, I need to fix another thing that I have been postponing for a while: I will need to migrate my Home Assistant installation from a Docker container installation to a Home Assistant OS installation, so that I can install Addons, including Firefly III, MySQL and Portainer, to have a bit more organised and less hand-knitted home server setup. Let's see how that goes next week :D
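As a first sketch towards that oracle script (my own addition; the endpoint and field names follow the Firefly III REST API as I remember it, so treat them as assumptions), the list of configured bills can be pulled with a personal access token:
curl -s \
  -H "Authorization: Bearer $FIREFLY_TOKEN" \
  -H "Accept: application/json" \
  "https://firefly.example.net/api/v1/bills" \
  | jq -r '.data[].attributes.name'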

13 October 2023

Reproducible Builds (diffoscope): diffoscope 251 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 251. This version includes the following changes:
* If the equivalent of  file -i  returns text/plain, fallback to comparing
  this file as a text file. This especially helps when file(1) miscategorises
  text files as some esoteric type. (Closes: Debian:#1053668)
* Update copyright years.
You can find out more by visiting the project homepage.

11 October 2023

Iustin Pop: Not-quite-announcement: this blog is not entirely dead

Something, something, then something else, and it's been another six months since I last wrote anything. The world is crazy, somehow there's no time for anything, and yet life moves forward, inexorably. (Well, without going into gory side-notes about how evil stops life in various parts of the world.) As proof that this blog is not dead, I've finally implemented proper per-page keywords support, rather than the hard-coded keywords that were present before. Well, those defaults are still present in all pages that don't declare keywords; I will probably remove them at one point. Implementing this in Hakyll is not hard, at all, if one semi-regularly writes code. There are a number of examples out there, but parts of them were confusing. Anyway, after 2 files changed, 23 insertions(+), 8 deletions(-), this page is the first one to have any keywords. And the funny thing is, after initially struggling to write a single line of Haskell, the final version of the keywords code is something like this:
 
    where renderKeywords =
            return . escapeHtml . intercalate "," . map trim . splitAll ","
And I wrote that as is, it compiled successfully, and it did the right thing on the first try. I only write some Haskell code 2-3 times per year, but man, I love this language. Until next time, hopefully before 2024.

Russell Coker: The PineTime

I have just got a PineTime smart watch [1] from Pine64. They cost $US27 each which ended up as $144.63 Australian for three including postage when I ordered on the 16th of September, it s annoying that you can t order more than 3 at a time to reduce postage costs. The Australian online store Kogan has smart watches starting at about $15 [2] with Bluetooth and support for phone notifications so the $48.21 for a PineTime doesn t compare well on just price and features. The watches Kogan sells start getting into high resolution at around the $25 price and many of them have features like 24*7 heart monitoring that the PineTime lacks (it just measures when you request it). No-one would order a PineTime for being cheap or having lots of features, you order it because you want open hardware that allows you to do things your way. Also the PineTime isn t going to be orphaned while it s likely that in a few years most of the cheap watches sold by Kogan etc won t support the new phones running the latest version of Android. The screen of the PineTime is 240*240 resolution (about 260dpi) with 64k colors. The screen resolution is lower than some high-end smart watches but higher than most phones and almost all monitors. I doubt that much benefit could be gained from higher resolution. Even on minimum brightness the screen is easy to read on all but the brightest sunny days. The compute capabilities are 4.5MB of flash storage, 64k of RAM, and a 64MHz CPU this can t run Linux and nothing like it will run Linux for a long time. I ve had the PineTime for 6 days now, I charged it once and it s now at 55% battery. It looks like it will last close to 2 weeks on a single charge and it s claimed that a newer firmware will make the battery last longer. Software The main Android app for using with the PineTime is GadgetBridge which I installed from the f-droid repository. It had lots of click-through menus for allowing access to various Android features (contacts, bluetooth, draw over foreground, location, and more) but after that it was easy to setup. It was the first bluetooth device I ve used which had a 6 digit PIN for connecting to a phone. Initially I used the PineTime with my Huawei Nova 7i [3]. The aim is to eventually have it run from my PinePhonePro but my test of the PinePhonePro didn t go as well as hoped [4]. Now I m using it on my Huawei Mate 10 Pro. It comes with InfiniTime [5] installed as the default firmware, mine had 1.11.0 which is a fairly recent version. I will probably upgrade it soon to get the better power optimisation and weather alerts in the watch face. I don t have any plans to use different watch firmware and I don t have any plans to contribute to firmware development I just can t hack on every FOSS project around it s better to do big contributions to a small number of projects. For people who don t want the default firmware the Wasp-OS project seems interesting as it s written in Python [6], I don t like Python but it s very popular. Python is particularly popular in ML development, it will be interesting to see if Wasp-OS becomes a preferred platform for smart watches that talk to GPT servers. Generally the software works well, one annoyance is that when a notification goes away on the phone it remains on the PineTime and has to be manually dismissed. It would be nice if clearing notifications on the phone would clear them on the PineTime too. 
The music control works with RocketPlayer on Android; it displays the track name and has options for pause/play and for skipping forward and backward one track. Annoyingly the current firmware doesn't allow configuring the main screens: from the primary screen you swipe down for notifications, right for settings, up for menus, and there's nothing defined for swipe left. I'd like to make swipe left the command to get to music control.

Hardware

It has a detachable band that appears to be within the common range of watch bands. According to the PineTime Wiki page [7] there is a selection of alternate bands that will fit it, but some don't because the band is recessed into the watch. It is IP67 rated, which means you can probably wear it while swimming. The charging contacts are exposed on the bottom of the case, which means that any chemicals left by pool water can be cleaned off, and as the contacts are apparently not expected to be harmed by sweat and skin oil there shouldn't be a problem charging it. I have significant experience using a Samsung Galaxy S5 Mini, which is rated at IP67, in swimming pools. I had two problems with the S5 Mini when getting out of the pool: firstly, water in the headphone socket made the phone consider that it was in headphone mode and turn off the speakers; and secondly, it took hours to become dry enough to charge, and after many swims the charge rate dropped, presumably due to oxide on the contacts. There are reports of success when swimming with a PineTime. Generally it feels well made and appears more solid than the cheapest Kogan devices.

Conclusion

If I wanted monitoring for medical reasons then I would choose a different smart watch. I've read about people doing things like tracking their body stats 24*7 and trying to discover useful things; the PineTime is not a good option for that biohacking type of use. However, if I did have a need for such things I'd probably just buy a second smart watch and have one on each wrist. The PineTime generally works well. It's a pity it has fewer hardware features than closed devices that are cheaper, but having a firmware that can be continually improved by the community is good. The continually expanding use of mobile phone technology in devices for custom corporate use (such as a mobile phone in a custom case for scanning prices etc in a supermarket) has some potential here, and I can imagine someone adding custom features to a PineTime for such use. When a supermarket chain has 200,000 employees (as Woolworths in Australia does), paying for a few months of software development work to make a smart watch do specific things for that company could provide significant value. There are probably some business opportunities for FOSS developers to hack on extra hardware for a PineTime and write software to support it. I recommend that everyone who's into FOSS buy one of these. Preferably make a deal with two friends to get the minimum postage cost.

9 October 2023

Aigars Mahinovs: Figuring out finances part 1

I have been managing my finances and getting an overview of where I am financially and where I am going month-to-month for a few years already. That means that I already have a method of doing my finances and a method of thinking about them. So far this has been supported by a commercial tool called MoneyWiz. I was happy to pay them for the ability to have a solid product and be able to easily access my finances from my phone (to enter cash transactions directly in the field) and sync data across multiple devices with their cloud offering. However, MoneyWiz development stalled. So much so that they stopped updating their Android app and cancelled web version plans to just focus on the iPhone and MacOS clients. And even there the development did not seem very successful over the last year. So I am cancelling their subscription, extracting all my data (as CSV) and looking for alternatives. So let's start with the basics - what do I want from finance software?
  • Open source and self-hosted
  • Accounts to represent real accounts in my bank, credit cards, cash account
  • Transactions to represent money moving in/out/between accounts
  • Tree of categories for transactions to categorise spending
  • Category reports on arbitrary time periods
  • Ability to see the expected balance on each account at any historic point in time from given transactions
  • Ability to do a corrective transaction - hmm, I don't know what happened, but right now I have X in cash
  • Adding past transactions before a corrective transaction should not change the balance after the correction (see the sketch right after this list)
  • Ability to automatically import transactions from my bank
  • Ability to plan future, recurring transactions
  • Incoming transactions from bank import should be auto-matched with planned recurring transactions
  • Ability to see missed planned recurring transactions
  • Ability to see value deviations in planned recurring transactions
  • Ability to manually match a transaction with a planned recurring transaction for that month or skip a month
  • Ability to see a projected balance on my accounts at any future date given planned transactions
  • Credit card fully refilling its current (negative) balance to zero (from a given account) should be a plannable transaction
  • Accessible from multiple devices, like Linux, Android, iPhone, MacOS. Most likely that means web-based
  • Archiving accounts without losing their past transactions with active accounts
  • API to access data, so that I could wire that up to a Home Assistant dashboard
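To make the corrective-transaction requirements above concrete, here is a minimal sketch of the behaviour I am after. This is just a toy model in Python of my own; none of the class or function names come from MoneyWiz, Firefly III or any other tool mentioned here. The idea is that a correction pins the balance of an account at a point in time, so transactions entered later for earlier dates cannot change anything after it.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Tx:
    day: date
    amount: float       # positive = money in, negative = money out

@dataclass
class Correction:
    day: date
    balance: float      # "right now I have X in cash"

def balance_on(day, opening, txs, correction=None):
    """Balance of a single account on `day`.

    If a correction exists on or before `day`, everything before it is
    ignored: the correction pins the balance, so adding or editing past
    transactions can no longer change anything after the correction date.
    """
    if correction is not None and correction.day <= day:
        later = sum(t.amount for t in txs if correction.day < t.day <= day)
        return correction.balance + later
    return opening + sum(t.amount for t in txs if t.day <= day)

# A forgotten past expense is entered later, but the balance after the
# correction stays exactly what was counted in the wallet on the 5th.
cash = [Tx(date(2023, 10, 2), -30.0)]
fix = Correction(date(2023, 10, 5), 100.0)
cash.append(Tx(date(2023, 10, 3), -15.0))            # added after the fact
assert balance_on(date(2023, 10, 6), 200.0, cash, fix) == 100.0
assert balance_on(date(2023, 10, 4), 200.0, cash, fix) == 155.0  # before the correction
```

A real tool would obviously use proper decimal amounts and per-account histories, but this is the invariant I want the software to guarantee.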
I don't care about currencies (thanks, Euro!), localization, taxes and double-entry bookkeeping. I don't have much use for tags, budgets or savings targets. It could be useful to be able to track stock investment values (but I do have a pretty good one from the bank that does that anyway). And tracking loans (incoming and outgoing) could be a worthwhile thing as well, but that can also be simulated with separate accounts for each loan. Tracking refundable expenses (like medical expenses that insurances should refund later on) would also be nice; that could also be simulated by having a separate expense category (Medical - refundable) and putting the refund as a negative expense into that category.

One weird feature that I really like is having transfer transactions that have arbitrary incoming and outgoing dates. For some transactions it is a simple transfer delay - money leaves account A on Friday and only appears in the destination account B on Monday morning. However, this can also happen the other way around - a few credit cards I have seen work in a way that on the 14th of each month the balance of the credit card gets reset to zero, and then a week later the negative balance that was on that credit card gets taken from the base account that the card is bound to. So the money leaves the base account a week later than it arrives in the credit card account. Financial systems really are magic sometimes :D Ideally that should be represented by a single transaction with two dates - one date when money leaves one account and another date when it arrives in a different account - so that balances on both accounts in the days between would be correct (as in matching what the bank says they were on those dates); there is a small sketch of this idea at the end of this post. The software should also auto-match such a transaction from bank data imports. And predict its value from the negative balance of the credit card. And a pony! But in the worst case this could be simulated using a separate "In transfer" account.

When I am thinking about my finances, the key metric for me is: how much will I spend this month - both in fixed spending (rent, Internet and phone bills, health insurance, subscriptions, ...) and in variable spending (groceries, eating out, clothing, ...). In theory that should be trivial, but in practice a simple calendar month view will fail, because a bunch of fixed expenses occur at the month end/start boundary and can happen either at the end of the previous month or at the start of the next month, depending on how the weekends happen to fall and how the systems of various providers decided to do direct debits this month. So the 1st of the month is not a good snapshotting time. Any date between the 6th (last possible day for monthly bills) and the 20th (earliest possible salary day) would be a better cut-off point. Looking at the balance of my main account at that point in time lets me know what the difference between income and expenses was for the past month. And that is the number I want the financial app to be able to predict as soon as possible. An additional complication is the way credit cards work here: the expenses booked to a credit card after the 14th of September will only appear in the main account on the 20th of October and thus will actually only count towards the expenses of the month of October.
That makes it very confusing to see a larger overspend in the main account on the 6th of October and then use the categories to try to figure out where the money went, because all the spending that happened on the credit card after the 14th of September should not really be looked at in that context, but spending on the main account or cash account should be counted all the way to the 6th of October. :D I am not optimistic that any finance software would be able to deal with this mess correctly.

So far I am tending to consider Firefly III, which has most of what I need and looks great and well maintained, but has the tiny drawback of being written in PHP, so that basically excludes me from contributing anything beyond ideas and feedback. :D I already did a quick test of Firefly via local Docker containers, so the next step would be to try to set it up as a production instance (using the same always-on mini PC that is running my Home Assistant instance), get the database up and working, get Firefly III itself up and running, get the account importer working for the bank connection, import historical transactions from there, set up my categories and recurring transactions, import past data dumped out of MoneyWiz and see if this works well and gives me the insights I need. If you know a better solution, please let me know on any social or email channel!
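And, as promised above, a small sketch of the two-date transfer idea. Again this is only my own toy Python model, not the data model of MoneyWiz, Firefly III or any other tool: a single transfer carries both the date the money leaves the source account and the date it arrives in the destination, so the daily balances of both accounts are right even in the days in between.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Transfer:
    src: str
    dst: str
    amount: float
    out_day: date      # money leaves the source account
    in_day: date       # money arrives in the destination (may be earlier or later)

def balance(account, day, opening, transfers):
    """Balance of `account` on `day`, counting each leg of a transfer on its own date."""
    total = opening.get(account, 0.0)
    for t in transfers:
        if t.src == account and t.out_day <= day:
            total -= t.amount
        if t.dst == account and t.in_day <= day:
            total += t.amount
    return total

# Credit card reset to zero on the 14th, settled from the main account on the 21st:
# the money is already on the card a week before it leaves the main account.
transfers = [Transfer("main", "card", 500.0, date(2023, 10, 21), date(2023, 10, 14))]
opening = {"main": 2000.0, "card": -500.0}
assert balance("card", date(2023, 10, 15), opening, transfers) == 0.0     # card already reset
assert balance("main", date(2023, 10, 15), opening, transfers) == 2000.0  # money not gone yet
assert balance("main", date(2023, 10, 22), opening, transfers) == 1500.0  # now it is
```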

6 October 2023

Russ Allbery: Review: The Far Reaches

Review: The Far Reaches, edited by John Joseph Adams
Publisher: Amazon Original Stories
Copyright: June 2023
ISBN: 1-6625-1572-3
ISBN: 1-6625-1622-3
ISBN: 1-6625-1503-0
ISBN: 1-6625-1567-7
ISBN: 1-6625-1678-9
ISBN: 1-6625-1533-2
Format: Kindle
Pages: 219
Amazon has been releasing anthologies of original short SFF with various guest editors, free for Amazon Prime members. I previously tried Black Stars (edited by Nisi Shawl and Latoya Peterson) and Forward (edited by Blake Crouch). Neither were that good, but the second was much worse than the first. Amazon recently released a new collection, this time edited by long-standing SFF anthology editor John Joseph Adams and featuring a new story by Ann Leckie, which sounded promising enough to give them another chance. The definition of insanity is doing the same thing over and over again and expecting different results. As with the previous anthologies, each story is available separately for purchase or Amazon Prime "borrowing" with separate ISBNs. The sidebar cover is for the first in the sequence. Unlike the previous collections, which were longer novelettes or novellas, my guess is all of these are in the novelette range. (I did not do a word count.) If you're considering this anthology, read the Okorafor story ("Just Out of Jupiter's Reach"), consider "How It Unfolds" by James S.A. Corey, and avoid the rest. "How It Unfolds" by James S.A. Corey: Humans have invented a new form of physics called "slow light," which can duplicate any object that is scanned. The energy expense is extremely high, so the result is not a post-scarcity paradise. What the technology does offer, however, is a possible route to interstellar colonization: duplicate a team of volunteers and a ship full of bootstrapping equipment, and send copies to a bunch of promising-looking exoplanets. One of them might succeed. The premise is interesting. The twists Corey adds on top are even better. What can be duplicated once can be duplicated again, perhaps with more information. This is a lovely science fiction idea story that unfortunately bogs down because the authors couldn't think of anywhere better to go with it than relationship drama. I found the focus annoying, but the ideas are still very neat. (7) "Void" by Veronica Roth: A maintenance worker on a slower-than-light passenger ship making the run between Sol and Centauri unexpectedly is called to handle a dead body. A passenger has been murdered, two days outside the Sol system. Ace is in no way qualified to investigate the murder, nor is it her job, but she's watched a lot of crime dramas and she has met the victim before. The temptation to start poking around is impossible to resist. It's been a long time since I've read a story built around the differing experiences of time for people who stay on planets and people who spend most of their time traveling at relativistic speeds. It's a bit of a retro idea from an earlier era of science fiction, but it's still a good story hook for a murder mystery. None of the characters are that memorable and Roth never got me fully invested in the story, but this was still a pleasant way to pass the time. (6) "Falling Bodies" by Rebecca Roanhorse: Ira is the adopted son of a Genteel senator. He was a social experiment in civilizing the humans: rescue a human orphan and give him the best of Genteel society to see if he could behave himself appropriately. The answer was no, which is how Ira finds himself on Long Reach Station with a parole officer and a schooling opportunity, hopefully far enough from his previous mistakes for a second chance. Everyone else seems to like Rebecca Roanhorse's writing better than I do, and this is no exception. 
Beneath the veneer of a coming-of-age story with a twist of political intrigue, this is brutal, depressing, and awful, with an ending that needs a lot of content warnings. I'm sorry that I read it. (3) "The Long Game" by Ann Leckie: The Imperial Radch trilogy are some of my favorite science fiction novels of all time, but I am finding Leckie's other work a bit hit and miss. I have yet to read a novel of hers that I didn't like, but the short fiction I've read leans more heavily into exploring weird and alien perspectives, which is not my favorite part of her work. This story is firmly in that category: the first-person protagonist is a small tentacled alien creature, a bit like a swamp-dwelling octopus. I think I see what Leckie is doing here: balancing cynicism and optimism, exploring how lifespans influence thinking and planning, and making some subtle points about colonialism. But as a reading experience, I didn't enjoy it. I never liked any of the characters, and the conclusion of the story is the unsettling sort of main-character optimism that seems rather less optimistic to the reader. (4) "Just Out of Jupiter's Reach" by Nnedi Okorafor: Kármán scientists have found a way to grow living ships that can achieve a symbiosis with a human pilot, but the requirements for that symbiosis are very strict and hard to predict. The result was a planet-wide search using genetic testing to find the rare and possibly nonexistent matches. They found seven people. The deal was simple: spend ten years in space, alone, in her ship. No contact with any other human except at the midpoint, when the seven ships were allowed to meet up for a week. Two million euros a year, for as long as she followed the rules, and the opportunity to be part of a great experiment, providing data that will hopefully lead to humans becoming a spacefaring species. The core of this story is told during the seven days in the middle of the mission, and thus centers on people unfamiliar with human contact trying to navigate social relationships after five years in symbiotic ships that reshape themselves to their whims and personalities. The ships themselves link so that the others can tour, which offers both a good opportunity for interesting description and a concretized metaphor about meeting other people. I adore symbiotic spaceships, so this story had me at the premise. The surface plot is very psychological, and I didn't entirely click with it, but the sense of wonder vibes beneath that surface were wonderful. It also feels fresh and new: I've seen most of the ideas before, but not presented or written this way, or approached from quite this angle. Definitely the best story of the anthology. (8) "Slow Time Between the Stars" by John Scalzi: This, on the other hand, was a complete waste of time, redeemed only by being the shortest "story" in the collection. "Story" is generous, since there's only one character and a very dry, linear plot that exists only to make a philosophical point. "Speculative essay" may be closer. The protagonist is the artificial intelligence responsible for Earth's greatest interstellar probe. It is packed with a repository of all of human knowledge and the raw material to create life. Its mission is to find an exoplanet capable of sustaining that life, and then recreate it and support it. The plot, such as it is, follows the AI's decision to abandon that mission and cut off contact with Earth, for reasons that it eventually explains. Every possible beat of this story hit me wrong.
The sense of wonder attaches to the most prosaic things and skips over the moments that could have provoked real wonder. The AI is both unbelievable and irritating, with all of the smug self-confidence of an Internet reply guy. The prose is overwrought in all the wrong places ("the finger of God, offering the spark to animate the dirt of another world" would totally be this AI's profile quote under their forum avatar). The only thing I liked about the story is the ethical point that it slowly meanders into, which I think I might agree with and at least find plausible. But it's delivered by the sort of character I would actively leave rooms to avoid, in a style that's about as engrossing as a tax form. Avoid. (2) Rating: 5 out of 10

3 October 2023

Russ Allbery: Review: Monstrous Regiment

Review: Monstrous Regiment, by Terry Pratchett
Series: Discworld #31
Publisher: Harper
Copyright: October 2003
Printing: August 2014
ISBN: 0-06-230741-X
Format: Mass market
Pages: 457
Monstrous Regiment is the 31st Discworld novel, but it mostly stands by itself. You arguably could start here, although you would miss the significance of Vimes's presence and the references to The Truth. The graphical reading order guide puts it loosely after The Truth and roughly in the Industrial Revolution sequence, but the connections are rather faint.
There was always a war. Usually they were border disputes, the national equivalent of complaining that the neighbor was letting their hedge row grow too long. Sometimes they were bigger. Borogravia was a peace-loving country in the middle of treacherous, devious, warlike enemies. They had to be treacherous, devious, and warlike; otherwise, we wouldn't be fighting them, eh? There was always a war.
Polly's brother, who wanted nothing more than to paint (something that the god Nuggan and the ever-present Duchess certainly did not consider appropriate for a strapping young man), was recruited to fight in the war and never came back. Polly is worried about him and tired of waiting for news. Exit Polly, innkeeper's daughter, and enter the young lad Oliver Perks, who finds the army recruiters in a tavern the next town over. One kiss of the Duchess's portrait later, and Polly is a private in the Borogravian army.

I suspect this is some people's favorite Discworld novel. If so, I understand why. It was not mine, for reasons that I'll get into, but which are largely not Pratchett's fault and fall more into the category of pet peeves.

Pratchett has dealt with both war and gender in the same book before. Jingo is also about a war pushed by a ruling class for stupid reasons, and featured a substantial subplot about Nobby cross-dressing that turns into a deeper character re-evaluation. I thought the war part of Monstrous Regiment was weaker (this is part of my complaint below), but gender gets a considerably deeper treatment. Monstrous Regiment is partly about how arbitrary and nonsensical gender roles are, and largely about how arbitrary and abusive social structures can become weirdly enduring because they build up their own internally reinforcing momentum. No one knows how to stop them, and a lot of people find familiar misery less frightening than unknown change, so the structure continues despite serving no defensible purpose.

Recently, there was a brief attempt in some circles to claim Pratchett posthumously for the anti-transgender cause in the UK. Pratchett's daughter was having none of it, and any Pratchett reader should have been able to reject that out of hand, but Monstrous Regiment is a comprehensive refutation written by Pratchett himself some twenty years earlier. Polly herself is not transgender. She thinks of herself as a woman throughout the book; she's just pretending to be a boy. But she also rejects binary gender roles with the scathing dismissal of someone who knows first-hand how superficial they are, and there is at least one transgender character in this novel (although to say who would be a spoiler). By the end of the book, you will have no doubt that Pratchett's opinion about people imposing gender roles on others is the same as his opinion about every other attempt to treat people as things.

That said, by 2023 standards the treatment of gender here seems... naive? I think 2003 may sadly have been a more innocent time. We're now deep into a vicious backlash against any attempt to question binary gender assignment, but very little of that nastiness and malice is present here. In one way, this is a feature; there's more than enough of that in real life. However, it also makes the undermining of gender roles feel a bit too easy. There are good in-story reasons for why it's relatively simple for Polly to pass as a boy, but I still spent a lot of the book thinking that passing as a private in the army would be a lot harder and riskier than this. Pratchett can't resist a lot of cross-dressing and gender befuddlement jokes, all of which are kindly and wry but (at least for me) hit a bit differently in 2023 than they would have in 2003. The climax of the story is also a reference to a classic UK novel that to even name would be to spoil one or both of the books, but which I thought pulled the punch of the story and dissipated a lot of the built-up emotional energy.
My larger complaints, though, are more idiosyncratic. This is a war novel about the enlisted ranks, including the hazing rituals involved in joining the military. There are things I love about military fiction, but apparently that reaction requires I have some sympathy for the fight or the goals of the institution. Monstrous Regiment falls into the class of war stories where the war is pointless and the system is abusive but the camaraderie in the ranks makes service oddly worthwhile, if not entirely justifiable. This is a real feeling that many veterans do have about military service, and I don't mean to question it, but apparently reading about it makes me grumbly. There's only so much of the apparently gruff sergeant with a heart of gold that I can take before I start wondering why we glorify hazing rituals as a type of tough love, or why the person with some authority doesn't put a direct stop to nastiness instead of providing moral support so subtle you could easily blink and miss it. Let alone the more basic problems like none of these people should have to be here doing this, or lots of people are being mangled and killed to make possible this heart-warming friendship. Like I said earlier, this is a me problem, not a Pratchett problem. He's writing a perfectly reasonable story in a genre I just happen to dislike. He's even undermining the genre in the process, just not quite fast enough or thoroughly enough for my taste. A related grumble is that Monstrous Regiment is very invested in the military trope of naive and somewhat incompetent officers who have to be led by the nose by experienced sergeants into making the right decision. I have never been in the military, but I work in an industry in which it is common to treat management as useless incompetents at best and actively malicious forces at worst. This is, to me, one of the most persistently obnoxious attitudes in my profession, and apparently my dislike of it carries over as a low tolerance for this very common attitude towards military hierarchy. A full expansion of this point would mostly be about the purpose of management, division of labor, and people's persistent dismissal of skills they don't personally have and may perceive as gendered, and while some of that is tangentially related to this book, it's not closely-related enough for me to bore you with it in a review. Maybe I'll write a stand-alone blog post someday. Suffice it to say that Pratchett deployed a common trope that most people would laugh at and read past without a second thought, but that for my own reasons started getting under my skin by the end of the novel. All of that grumbling aside, I did like this book. It is a very solid Discworld novel that does all the typical things a Discworld novel does: likable protagonists you can root for, odd and fascinating side characters, sharp and witty observations of human nature, and a mostly enjoyable ending where most of the right things happen. Polly is great; I was very happy to read a book from her perspective and would happily read more. Vimes makes a few appearances being Vimes, and while I found his approach in this book less satisfying than in Jingo, I'll still take it. And the examination of gender roles, even if a bit less fraught than current politics, is solid Pratchett morality. The best part of this book for me, by far, is Wazzer. 
I think that subplot was the most Discworld part of this book: a deeply devout belief in a pseudo-godlike figure that is part of the abusive social structure that creates many of the problems of the book becomes something considerably stranger and more wonderful. There is a type of belief that is so powerful that it transforms the target of that belief, at least in worlds like Discworld that have a lot of ambient magic. Not many people have that type of belief, and having it is not a comfortable experience, but it makes for a truly excellent story. Monstrous Regiment is a solid Discworld novel. It was not one of my favorites, but it probably will be someone else's favorite for a host of good reasons. Good stuff; if you've read this far, you will enjoy it. Followed by A Hat Full of Sky in publication order, and thematically (but very loosely) by Going Postal. Rating: 8 out of 10
